00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 841 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3506 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.227 > git --version # 'git version 2.39.2' 00:00:00.227 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.254 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.254 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.488 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.498 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.509 Checking out Revision 4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d (FETCH_HEAD) 00:00:04.509 > git config core.sparsecheckout # timeout=10 00:00:04.521 > git read-tree -mu HEAD # timeout=10 00:00:04.537 > git checkout -f 4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d # timeout=5 00:00:04.557 Commit message: "jenkins/jjb-config: Adjust vs-dpdk config for v24.09" 00:00:04.558 > git rev-list --no-walk 4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d # timeout=10 00:00:04.649 [Pipeline] Start of Pipeline 00:00:04.663 [Pipeline] library 00:00:04.665 Loading library shm_lib@master 00:00:04.665 Library shm_lib@master is cached. Copying from home. 00:00:04.680 [Pipeline] node 00:00:04.692 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.694 [Pipeline] { 00:00:04.704 [Pipeline] catchError 00:00:04.705 [Pipeline] { 00:00:04.716 [Pipeline] wrap 00:00:04.724 [Pipeline] { 00:00:04.729 [Pipeline] stage 00:00:04.730 [Pipeline] { (Prologue) 00:00:04.744 [Pipeline] echo 00:00:04.745 Node: VM-host-SM0 00:00:04.750 [Pipeline] cleanWs 00:00:04.759 [WS-CLEANUP] Deleting project workspace... 00:00:04.759 [WS-CLEANUP] Deferred wipeout is used... 
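[Editor's note] The checkout at the top of this log pins the build-pool repo to one exact revision using a depth-1 fetch. A minimal standalone sketch of that same pattern, run in an empty directory (repo URL and behavior taken from the log above; this is an illustration, not the Jenkins plugin's actual code path):
REPO=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git init jbp && cd jbp
# A shallow fetch transfers only the tip of master; FETCH_HEAD records what arrived.
git fetch --tags --force --depth=1 -- "$REPO" refs/heads/master
# Detach onto the exact commit, mirroring the `git checkout -f 4f3f5a4a...` above.
git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"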
00:00:04.765 [WS-CLEANUP] done 00:00:04.955 [Pipeline] setCustomBuildProperty 00:00:05.040 [Pipeline] httpRequest 00:00:05.455 [Pipeline] echo 00:00:05.456 Sorcerer 10.211.164.101 is alive 00:00:05.464 [Pipeline] retry 00:00:05.466 [Pipeline] { 00:00:05.477 [Pipeline] httpRequest 00:00:05.480 HttpMethod: GET 00:00:05.480 URL: http://10.211.164.101/packages/jbp_4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d.tar.gz 00:00:05.481 Sending request to url: http://10.211.164.101/packages/jbp_4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d.tar.gz 00:00:05.482 Response Code: HTTP/1.1 200 OK 00:00:05.482 Success: Status code 200 is in the accepted range: 200,404 00:00:05.483 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d.tar.gz 00:00:06.236 [Pipeline] } 00:00:06.250 [Pipeline] // retry 00:00:06.256 [Pipeline] sh 00:00:06.535 + tar --no-same-owner -xf jbp_4f3f5a4a30726c4eea24a2c31f6bdf50c75a515d.tar.gz 00:00:06.549 [Pipeline] httpRequest 00:00:07.013 [Pipeline] echo 00:00:07.014 Sorcerer 10.211.164.101 is alive 00:00:07.023 [Pipeline] retry 00:00:07.024 [Pipeline] { 00:00:07.037 [Pipeline] httpRequest 00:00:07.042 HttpMethod: GET 00:00:07.042 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:07.043 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:07.043 Response Code: HTTP/1.1 200 OK 00:00:07.044 Success: Status code 200 is in the accepted range: 200,404 00:00:07.044 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:30.177 [Pipeline] } 00:00:30.195 [Pipeline] // retry 00:00:30.203 [Pipeline] sh 00:00:30.483 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:33.032 [Pipeline] sh 00:00:33.313 + git -C spdk log --oneline -n5 00:00:33.313 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:33.313 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:33.313 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:33.313 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:33.313 9469ea403 nvme/fio_plugin: add trim support 00:00:33.333 [Pipeline] withCredentials 00:00:33.344 > git --version # timeout=10 00:00:33.357 > git --version # 'git version 2.39.2' 00:00:33.373 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.375 [Pipeline] { 00:00:33.385 [Pipeline] retry 00:00:33.387 [Pipeline] { 00:00:33.402 [Pipeline] sh 00:00:33.683 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:33.695 [Pipeline] } 00:00:33.715 [Pipeline] // retry 00:00:33.720 [Pipeline] } 00:00:33.737 [Pipeline] // withCredentials 00:00:33.747 [Pipeline] httpRequest 00:00:34.349 [Pipeline] echo 00:00:34.351 Sorcerer 10.211.164.101 is alive 00:00:34.361 [Pipeline] retry 00:00:34.364 [Pipeline] { 00:00:34.378 [Pipeline] httpRequest 00:00:34.383 HttpMethod: GET 00:00:34.383 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:34.384 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:34.389 Response Code: HTTP/1.1 200 OK 00:00:34.389 Success: Status code 200 is in the accepted range: 200,404 00:00:34.390 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.261 [Pipeline] } 00:01:31.283 [Pipeline] 
// retry 00:01:31.292 [Pipeline] sh 00:01:31.578 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.967 [Pipeline] sh 00:01:33.249 + git -C dpdk log --oneline -n5 00:01:33.249 caf0f5d395 version: 22.11.4 00:01:33.249 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:33.249 dc9c799c7d vhost: fix missing spinlock unlock 00:01:33.249 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:33.249 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:33.267 [Pipeline] writeFile 00:01:33.282 [Pipeline] sh 00:01:33.564 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:33.576 [Pipeline] sh 00:01:33.857 + cat autorun-spdk.conf 00:01:33.857 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.857 SPDK_TEST_NVMF=1 00:01:33.857 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.857 SPDK_TEST_USDT=1 00:01:33.857 SPDK_RUN_UBSAN=1 00:01:33.857 SPDK_TEST_NVMF_MDNS=1 00:01:33.857 NET_TYPE=virt 00:01:33.857 SPDK_JSONRPC_GO_CLIENT=1 00:01:33.857 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:33.857 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:33.857 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.865 RUN_NIGHTLY=1 00:01:33.867 [Pipeline] } 00:01:33.881 [Pipeline] // stage 00:01:33.896 [Pipeline] stage 00:01:33.899 [Pipeline] { (Run VM) 00:01:33.912 [Pipeline] sh 00:01:34.193 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:34.193 + echo 'Start stage prepare_nvme.sh' 00:01:34.193 Start stage prepare_nvme.sh 00:01:34.193 + [[ -n 5 ]] 00:01:34.193 + disk_prefix=ex5 00:01:34.193 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:34.193 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:34.193 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:34.193 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.193 ++ SPDK_TEST_NVMF=1 00:01:34.193 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:34.193 ++ SPDK_TEST_USDT=1 00:01:34.193 ++ SPDK_RUN_UBSAN=1 00:01:34.193 ++ SPDK_TEST_NVMF_MDNS=1 00:01:34.193 ++ NET_TYPE=virt 00:01:34.193 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:34.193 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:34.193 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:34.193 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.193 ++ RUN_NIGHTLY=1 00:01:34.193 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:34.193 + nvme_files=() 00:01:34.193 + declare -A nvme_files 00:01:34.193 + backend_dir=/var/lib/libvirt/images/backends 00:01:34.193 + nvme_files['nvme.img']=5G 00:01:34.193 + nvme_files['nvme-cmb.img']=5G 00:01:34.193 + nvme_files['nvme-multi0.img']=4G 00:01:34.193 + nvme_files['nvme-multi1.img']=4G 00:01:34.193 + nvme_files['nvme-multi2.img']=4G 00:01:34.193 + nvme_files['nvme-openstack.img']=8G 00:01:34.193 + nvme_files['nvme-zns.img']=5G 00:01:34.193 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:34.193 + (( SPDK_TEST_FTL == 1 )) 00:01:34.193 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:34.193 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:34.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:34.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:34.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:34.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:34.193 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.193 + for nvme in "${!nvme_files[@]}" 00:01:34.193 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:34.452 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.452 + for nvme in "${!nvme_files[@]}" 00:01:34.452 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:34.452 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.452 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:34.452 + echo 'End stage prepare_nvme.sh' 00:01:34.452 End stage prepare_nvme.sh 00:01:34.464 [Pipeline] sh 00:01:34.745 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:34.745 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:34.745 00:01:34.745 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:34.745 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:34.745 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:34.745 HELP=0 00:01:34.745 DRY_RUN=0 00:01:34.745 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:34.745 NVME_DISKS_TYPE=nvme,nvme, 00:01:34.745 NVME_AUTO_CREATE=0 00:01:34.745 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:34.745 NVME_CMB=,, 00:01:34.745 NVME_PMR=,, 00:01:34.745 NVME_ZNS=,, 00:01:34.745 NVME_MS=,, 00:01:34.745 NVME_FDP=,, 00:01:34.745 
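# [Editor's note] The "Formatting ..." lines above are produced by looping over a
# bash associative array that maps image name -> size. A condensed, hypothetical
# sketch of that loop (names and sizes copied from the trace; create_nvme_img.sh
# is the SPDK helper invoked above):
#   declare -A nvme_files=([nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G)
#   backend_dir=/var/lib/libvirt/images/backends
#   for nvme in "${!nvme_files[@]}"; do   # "${!arr[@]}" expands to the keys
#     sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
#          -n "$backend_dir/ex5-$nvme" -s "${nvme_files[$nvme]}"
#   done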
SPDK_VAGRANT_DISTRO=fedora39 00:01:34.745 SPDK_VAGRANT_VMCPU=10 00:01:34.745 SPDK_VAGRANT_VMRAM=12288 00:01:34.745 SPDK_VAGRANT_PROVIDER=libvirt 00:01:34.745 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:34.745 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:34.745 SPDK_OPENSTACK_NETWORK=0 00:01:34.745 VAGRANT_PACKAGE_BOX=0 00:01:34.745 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:34.745 FORCE_DISTRO=true 00:01:34.745 VAGRANT_BOX_VERSION= 00:01:34.745 EXTRA_VAGRANTFILES= 00:01:34.745 NIC_MODEL=e1000 00:01:34.745 00:01:34.745 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:34.745 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:38.034 Bringing machine 'default' up with 'libvirt' provider... 00:01:38.294 ==> default: Creating image (snapshot of base box volume). 00:01:38.554 ==> default: Creating domain with the following settings... 00:01:38.554 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728022770_eee3b0c65d9d1ada6db4 00:01:38.554 ==> default: -- Domain type: kvm 00:01:38.554 ==> default: -- Cpus: 10 00:01:38.554 ==> default: -- Feature: acpi 00:01:38.554 ==> default: -- Feature: apic 00:01:38.554 ==> default: -- Feature: pae 00:01:38.554 ==> default: -- Memory: 12288M 00:01:38.554 ==> default: -- Memory Backing: hugepages: 00:01:38.554 ==> default: -- Management MAC: 00:01:38.554 ==> default: -- Loader: 00:01:38.554 ==> default: -- Nvram: 00:01:38.554 ==> default: -- Base box: spdk/fedora39 00:01:38.554 ==> default: -- Storage pool: default 00:01:38.554 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728022770_eee3b0c65d9d1ada6db4.img (20G) 00:01:38.554 ==> default: -- Volume Cache: default 00:01:38.554 ==> default: -- Kernel: 00:01:38.554 ==> default: -- Initrd: 00:01:38.554 ==> default: -- Graphics Type: vnc 00:01:38.554 ==> default: -- Graphics Port: -1 00:01:38.554 ==> default: -- Graphics IP: 127.0.0.1 00:01:38.554 ==> default: -- Graphics Password: Not defined 00:01:38.554 ==> default: -- Video Type: cirrus 00:01:38.554 ==> default: -- Video VRAM: 9216 00:01:38.554 ==> default: -- Sound Type: 00:01:38.554 ==> default: -- Keymap: en-us 00:01:38.554 ==> default: -- TPM Path: 00:01:38.554 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:38.554 ==> default: -- Command line args: 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:38.554 ==> default: -> value=-drive, 00:01:38.554 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:38.554 ==> default: -> value=-drive, 00:01:38.554 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.554 ==> default: -> value=-drive, 00:01:38.554 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.554 ==> default: -> value=-drive, 00:01:38.554 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:38.554 ==> default: -> value=-device, 00:01:38.554 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.814 ==> default: Creating shared folders metadata... 00:01:38.814 ==> default: Starting domain. 00:01:40.194 ==> default: Waiting for domain to get an IP address... 00:01:58.286 ==> default: Waiting for SSH to become available... 00:01:58.286 ==> default: Configuring and enabling network interfaces... 00:02:01.618 default: SSH address: 192.168.121.68:22 00:02:01.618 default: SSH username: vagrant 00:02:01.618 default: SSH auth method: private key 00:02:04.175 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.733 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:17.293 ==> default: Mounting SSHFS shared folder... 00:02:18.669 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:18.669 ==> default: Checking Mount.. 00:02:20.046 ==> default: Folder Successfully Mounted! 00:02:20.046 ==> default: Running provisioner: file... 00:02:20.614 default: ~/.gitconfig => .gitconfig 00:02:21.181 00:02:21.181 SUCCESS! 00:02:21.181 00:02:21.181 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:21.181 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:21.181 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:21.181 00:02:21.191 [Pipeline] } 00:02:21.208 [Pipeline] // stage 00:02:21.219 [Pipeline] dir 00:02:21.219 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:21.221 [Pipeline] { 00:02:21.235 [Pipeline] catchError 00:02:21.236 [Pipeline] { 00:02:21.250 [Pipeline] sh 00:02:21.530 + vagrant ssh-config --host vagrant 00:02:21.530 + sed -ne /^Host/,$p 00:02:21.530 + tee ssh_conf 00:02:24.817 Host vagrant 00:02:24.817 HostName 192.168.121.68 00:02:24.817 User vagrant 00:02:24.817 Port 22 00:02:24.817 UserKnownHostsFile /dev/null 00:02:24.817 StrictHostKeyChecking no 00:02:24.817 PasswordAuthentication no 00:02:24.817 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:24.817 IdentitiesOnly yes 00:02:24.817 LogLevel FATAL 00:02:24.817 ForwardAgent yes 00:02:24.817 ForwardX11 yes 00:02:24.817 00:02:24.831 [Pipeline] withEnv 00:02:24.833 [Pipeline] { 00:02:24.847 [Pipeline] sh 00:02:25.129 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:25.129 source /etc/os-release 00:02:25.129 [[ -e /image.version ]] && img=$(< /image.version) 00:02:25.129 # Minimal, systemd-like check. 
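# (Editor's annotation, not part of the captured script: /.dockerenv is created
# by the Docker runtime inside every container, so testing for it is a cheap
# "am I in a container" check; the /etc/hostname mount test below then tells a
# swarm worker, whose hostname file is bind-mounted from the host, apart from a
# plain container.)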
00:02:25.129 if [[ -e /.dockerenv ]]; then 00:02:25.129 # Clear garbage from the node's name: 00:02:25.129 # agt-er_autotest_547-896 -> autotest_547-896 00:02:25.129 # $HOSTNAME is the actual container id 00:02:25.129 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:25.129 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:25.129 # We can assume this is a mount from a host where container is running, 00:02:25.129 # so fetch its hostname to easily identify the target swarm worker. 00:02:25.129 container="$(< /etc/hostname) ($agent)" 00:02:25.129 else 00:02:25.129 # Fallback 00:02:25.129 container=$agent 00:02:25.129 fi 00:02:25.129 fi 00:02:25.129 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:25.129 00:02:25.399 [Pipeline] } 00:02:25.415 [Pipeline] // withEnv 00:02:25.424 [Pipeline] setCustomBuildProperty 00:02:25.439 [Pipeline] stage 00:02:25.441 [Pipeline] { (Tests) 00:02:25.458 [Pipeline] sh 00:02:25.779 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:25.802 [Pipeline] sh 00:02:26.081 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:26.354 [Pipeline] timeout 00:02:26.355 Timeout set to expire in 1 hr 0 min 00:02:26.357 [Pipeline] { 00:02:26.373 [Pipeline] sh 00:02:26.652 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:27.219 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:02:27.227 [Pipeline] sh 00:02:27.500 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:27.772 [Pipeline] sh 00:02:28.051 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:28.328 [Pipeline] sh 00:02:28.608 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:28.868 ++ readlink -f spdk_repo 00:02:28.868 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:28.868 + [[ -n /home/vagrant/spdk_repo ]] 00:02:28.868 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:28.868 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:28.868 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:28.868 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:28.868 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:28.868 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:28.868 + cd /home/vagrant/spdk_repo 00:02:28.868 + source /etc/os-release 00:02:28.868 ++ NAME='Fedora Linux' 00:02:28.868 ++ VERSION='39 (Cloud Edition)' 00:02:28.868 ++ ID=fedora 00:02:28.868 ++ VERSION_ID=39 00:02:28.868 ++ VERSION_CODENAME= 00:02:28.868 ++ PLATFORM_ID=platform:f39 00:02:28.868 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:28.868 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:28.868 ++ LOGO=fedora-logo-icon 00:02:28.868 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:28.868 ++ HOME_URL=https://fedoraproject.org/ 00:02:28.868 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:28.868 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:28.868 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:28.868 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:28.868 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:28.868 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:28.868 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:28.868 ++ SUPPORT_END=2024-11-12 00:02:28.868 ++ VARIANT='Cloud Edition' 00:02:28.868 ++ VARIANT_ID=cloud 00:02:28.868 + uname -a 00:02:28.868 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:28.868 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:28.868 Hugepages 00:02:28.868 node hugesize free / total 00:02:28.868 node0 1048576kB 0 / 0 00:02:28.868 node0 2048kB 0 / 0 00:02:28.868 00:02:28.868 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:28.868 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:28.868 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:28.868 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:28.868 + rm -f /tmp/spdk-ld-path 00:02:28.868 + source autorun-spdk.conf 00:02:28.868 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.868 ++ SPDK_TEST_NVMF=1 00:02:28.868 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:28.868 ++ SPDK_TEST_USDT=1 00:02:28.868 ++ SPDK_RUN_UBSAN=1 00:02:28.868 ++ SPDK_TEST_NVMF_MDNS=1 00:02:28.868 ++ NET_TYPE=virt 00:02:28.868 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:28.868 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:28.868 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:28.868 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:28.869 ++ RUN_NIGHTLY=1 00:02:28.869 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:28.869 + [[ -n '' ]] 00:02:28.869 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:29.129 + for M in /var/spdk/build-*-manifest.txt 00:02:29.129 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:29.129 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.129 + for M in /var/spdk/build-*-manifest.txt 00:02:29.129 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:29.129 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.129 + for M in /var/spdk/build-*-manifest.txt 00:02:29.129 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:29.129 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:29.129 ++ uname 00:02:29.129 + [[ Linux == \L\i\n\u\x ]] 00:02:29.129 + sudo dmesg -T 00:02:29.129 + sudo dmesg --clear 00:02:29.129 + dmesg_pid=5974 00:02:29.129 + [[ Fedora Linux == FreeBSD ]] 00:02:29.129 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.129 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:29.129 + sudo dmesg -Tw 00:02:29.129 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:29.129 + [[ -x /usr/src/fio-static/fio ]] 00:02:29.129 + export FIO_BIN=/usr/src/fio-static/fio 00:02:29.129 + FIO_BIN=/usr/src/fio-static/fio 00:02:29.129 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:29.129 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:29.129 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:29.129 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.129 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:29.129 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:29.129 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.129 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:29.129 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:29.129 Test configuration: 00:02:29.129 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:29.129 SPDK_TEST_NVMF=1 00:02:29.129 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:29.129 SPDK_TEST_USDT=1 00:02:29.129 SPDK_RUN_UBSAN=1 00:02:29.129 SPDK_TEST_NVMF_MDNS=1 00:02:29.129 NET_TYPE=virt 00:02:29.129 SPDK_JSONRPC_GO_CLIENT=1 00:02:29.129 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:29.130 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:29.130 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:29.130 RUN_NIGHTLY=1 06:20:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:29.130 06:20:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:29.130 06:20:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.130 06:20:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.130 06:20:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.130 06:20:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.130 06:20:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.130 06:20:21 -- paths/export.sh@5 -- $ export PATH 00:02:29.130 06:20:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.130 06:20:21 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:29.130 06:20:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:29.130 06:20:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728022821.XXXXXX 00:02:29.130 06:20:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728022821.fZJ99T 00:02:29.130 06:20:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:29.130 06:20:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:29.130 06:20:21 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:29.130 06:20:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:29.130 06:20:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:29.130 06:20:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:29.130 06:20:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:29.130 06:20:21 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:29.130 06:20:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.130 06:20:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:29.130 06:20:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:29.130 06:20:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:29.130 06:20:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:29.130 06:20:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:29.130 Fri Oct 4 06:20:21 AM UTC 2024 00:02:29.130 06:20:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:29.390 LTS-66-g726a04d70 00:02:29.390 06:20:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:29.390 06:20:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:29.390 06:20:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:29.390 06:20:21 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:29.390 06:20:21 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.390 06:20:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.390 ************************************ 00:02:29.390 START TEST ubsan 00:02:29.390 ************************************ 00:02:29.390 using ubsan 00:02:29.390 06:20:21 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:29.390 00:02:29.390 real 0m0.000s 00:02:29.390 user 0m0.000s 00:02:29.390 sys 0m0.000s 00:02:29.390 06:20:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.390 06:20:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.390 ************************************ 00:02:29.390 END TEST ubsan 00:02:29.390 ************************************ 00:02:29.390 06:20:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:29.390 06:20:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:29.390 06:20:21 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:29.390 06:20:21 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:29.390 06:20:21 
-- common/autotest_common.sh@10 -- $ set +x 00:02:29.390 ************************************ 00:02:29.390 START TEST build_native_dpdk 00:02:29.390 ************************************ 00:02:29.390 06:20:21 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:29.390 06:20:21 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:29.390 06:20:21 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:29.390 06:20:21 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:29.390 06:20:21 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:29.390 06:20:21 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:29.390 06:20:21 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:29.390 06:20:21 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:29.390 06:20:21 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:29.390 06:20:21 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:29.390 06:20:21 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:29.390 06:20:21 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:29.390 06:20:21 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:29.390 06:20:21 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:29.390 caf0f5d395 version: 22.11.4 00:02:29.390 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:29.390 dc9c799c7d vhost: fix missing spinlock unlock 00:02:29.390 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:29.390 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:29.390 06:20:21 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:29.390 06:20:21 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:29.390 06:20:21 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:29.390 06:20:21 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:29.390 06:20:21 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:29.390 06:20:21 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:29.390 06:20:21 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:29.390 06:20:21 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:29.390 06:20:21 -- common/autobuild_common.sh@167 
-- $ cd /home/vagrant/spdk_repo/dpdk 00:02:29.390 06:20:21 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:29.390 06:20:21 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:29.390 06:20:21 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:29.390 06:20:21 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:29.390 06:20:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:29.390 06:20:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:29.390 06:20:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:29.390 06:20:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:29.390 06:20:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:29.390 06:20:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:29.390 06:20:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:29.390 06:20:21 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:29.390 06:20:21 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:29.391 06:20:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:29.391 06:20:21 -- scripts/common.sh@343 -- $ case "$op" in 00:02:29.391 06:20:21 -- scripts/common.sh@344 -- $ : 1 00:02:29.391 06:20:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:29.391 06:20:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:29.391 06:20:21 -- scripts/common.sh@364 -- $ decimal 22 00:02:29.391 06:20:21 -- scripts/common.sh@352 -- $ local d=22 00:02:29.391 06:20:21 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:29.391 06:20:21 -- scripts/common.sh@354 -- $ echo 22 00:02:29.391 06:20:21 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:29.391 06:20:21 -- scripts/common.sh@365 -- $ decimal 21 00:02:29.391 06:20:21 -- scripts/common.sh@352 -- $ local d=21 00:02:29.391 06:20:21 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:29.391 06:20:21 -- scripts/common.sh@354 -- $ echo 21 00:02:29.391 06:20:21 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:29.391 06:20:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:29.391 06:20:21 -- scripts/common.sh@366 -- $ return 1 00:02:29.391 06:20:21 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:29.391 patching file config/rte_config.h 00:02:29.391 Hunk #1 succeeded at 60 (offset 1 line). 00:02:29.391 06:20:21 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:29.391 06:20:21 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:29.391 06:20:21 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:29.391 06:20:21 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:29.391 06:20:21 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:29.391 06:20:21 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:29.391 06:20:21 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:29.391 06:20:21 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:29.391 06:20:21 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:29.391 06:20:21 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:29.391 06:20:21 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:29.391 06:20:21 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:29.391 06:20:21 -- scripts/common.sh@343 -- $ case "$op" in 00:02:29.391 06:20:21 -- scripts/common.sh@344 -- $ : 1 00:02:29.391 06:20:21 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:29.391 06:20:21 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:29.391 06:20:21 -- scripts/common.sh@364 -- $ decimal 22 00:02:29.391 06:20:21 -- scripts/common.sh@352 -- $ local d=22 00:02:29.391 06:20:21 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:29.391 06:20:21 -- scripts/common.sh@354 -- $ echo 22 00:02:29.391 06:20:21 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:29.391 06:20:21 -- scripts/common.sh@365 -- $ decimal 24 00:02:29.391 06:20:21 -- scripts/common.sh@352 -- $ local d=24 00:02:29.391 06:20:21 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:29.391 06:20:21 -- scripts/common.sh@354 -- $ echo 24 00:02:29.391 06:20:21 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:29.391 06:20:21 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:29.391 06:20:21 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:29.391 06:20:21 -- scripts/common.sh@367 -- $ return 0 00:02:29.391 06:20:21 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:29.391 patching file lib/pcapng/rte_pcapng.c 00:02:29.391 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:29.391 06:20:21 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:29.391 06:20:21 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:29.391 06:20:21 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:29.391 06:20:21 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:29.391 06:20:21 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.661 The Meson build system 00:02:34.661 Version: 1.5.0 00:02:34.661 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:34.661 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:34.661 Build type: native build 00:02:34.661 Program cat found: YES (/usr/bin/cat) 00:02:34.661 Project name: DPDK 00:02:34.661 Project version: 22.11.4 00:02:34.661 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.661 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:34.661 Host machine cpu family: x86_64 00:02:34.661 Host machine cpu: x86_64 00:02:34.661 Message: ## Building in Developer Mode ## 00:02:34.661 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.661 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:34.661 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.661 Program objdump found: YES (/usr/bin/objdump) 00:02:34.661 Program python3 found: YES (/usr/bin/python3) 00:02:34.661 Program cat found: YES (/usr/bin/cat) 00:02:34.661 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
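[Editor's note] For readability, the single-line meson invocation above, hand-wrapped with identical flags; `meson setup` is the modern spelling of the bare `meson` command, and the deprecation warning at the very end of this log refers to exactly that:
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
  -Denable_docs=false -Denable_kmods=false -Dtests=false \
  -Dc_link_args= \
  '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Dmachine=native \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,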
00:02:34.661 Checking for size of "void *" : 8 00:02:34.661 Checking for size of "void *" : 8 (cached) 00:02:34.661 Library m found: YES 00:02:34.661 Library numa found: YES 00:02:34.661 Has header "numaif.h" : YES 00:02:34.661 Library fdt found: NO 00:02:34.661 Library execinfo found: NO 00:02:34.661 Has header "execinfo.h" : YES 00:02:34.661 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.661 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.661 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.661 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.661 Run-time dependency openssl found: YES 3.1.1 00:02:34.661 Run-time dependency libpcap found: YES 1.10.4 00:02:34.661 Has header "pcap.h" with dependency libpcap: YES 00:02:34.661 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.661 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.661 Compiler for C supports arguments -Wformat: YES 00:02:34.661 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.661 Compiler for C supports arguments -Wformat-security: NO 00:02:34.661 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.661 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.661 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.661 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.661 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.661 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.661 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.661 Compiler for C supports arguments -Wundef: YES 00:02:34.661 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.662 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.662 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.662 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.662 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.662 Compiler for C supports arguments -mavx512f: YES 00:02:34.662 Checking if "AVX512 checking" compiles: YES 00:02:34.662 Fetching value of define "__SSE4_2__" : 1 00:02:34.662 Fetching value of define "__AES__" : 1 00:02:34.662 Fetching value of define "__AVX__" : 1 00:02:34.662 Fetching value of define "__AVX2__" : 1 00:02:34.662 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.662 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.662 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.662 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.662 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.662 Fetching value of define "__PCLMUL__" : 1 00:02:34.662 Fetching value of define "__RDRND__" : 1 00:02:34.662 Fetching value of define "__RDSEED__" : 1 00:02:34.662 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.662 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.662 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.662 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.662 Checking for function "getentropy" : YES 00:02:34.662 Message: lib/eal: Defining dependency "eal" 00:02:34.662 Message: lib/ring: Defining dependency "ring" 00:02:34.662 Message: lib/rcu: Defining dependency "rcu" 00:02:34.662 Message: lib/mempool: Defining dependency "mempool" 00:02:34.662 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.662 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:34.662 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.662 Compiler for C supports arguments -mpclmul: YES 00:02:34.662 Compiler for C supports arguments -maes: YES 00:02:34.662 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.662 Compiler for C supports arguments -mavx512bw: YES 00:02:34.662 Compiler for C supports arguments -mavx512dq: YES 00:02:34.662 Compiler for C supports arguments -mavx512vl: YES 00:02:34.662 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.662 Compiler for C supports arguments -mavx2: YES 00:02:34.662 Compiler for C supports arguments -mavx: YES 00:02:34.662 Message: lib/net: Defining dependency "net" 00:02:34.662 Message: lib/meter: Defining dependency "meter" 00:02:34.662 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.662 Message: lib/pci: Defining dependency "pci" 00:02:34.662 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.662 Message: lib/metrics: Defining dependency "metrics" 00:02:34.662 Message: lib/hash: Defining dependency "hash" 00:02:34.662 Message: lib/timer: Defining dependency "timer" 00:02:34.662 Fetching value of define "__AVX2__" : 1 (cached) 00:02:34.662 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:34.662 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:34.662 Message: lib/acl: Defining dependency "acl" 00:02:34.662 Message: lib/bbdev: Defining dependency "bbdev" 00:02:34.662 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:34.662 Run-time dependency libelf found: YES 0.191 00:02:34.662 Message: lib/bpf: Defining dependency "bpf" 00:02:34.662 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:34.662 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.662 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.662 Message: lib/distributor: Defining dependency "distributor" 00:02:34.662 Message: lib/efd: Defining dependency "efd" 00:02:34.662 Message: lib/eventdev: Defining dependency "eventdev" 00:02:34.662 Message: lib/gpudev: Defining dependency "gpudev" 00:02:34.662 Message: lib/gro: Defining dependency "gro" 00:02:34.662 Message: lib/gso: Defining dependency "gso" 00:02:34.662 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:34.662 Message: lib/jobstats: Defining dependency "jobstats" 00:02:34.662 Message: lib/latencystats: Defining dependency "latencystats" 00:02:34.662 Message: lib/lpm: Defining dependency "lpm" 00:02:34.662 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:34.662 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:34.662 Message: lib/member: Defining dependency "member" 00:02:34.662 Message: lib/pcapng: Defining dependency "pcapng" 00:02:34.662 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.662 Message: lib/power: Defining dependency "power" 00:02:34.662 Message: lib/rawdev: Defining dependency "rawdev" 00:02:34.662 Message: lib/regexdev: Defining dependency "regexdev" 00:02:34.662 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.662 Message: lib/rib: Defining 
dependency "rib" 00:02:34.662 Message: lib/reorder: Defining dependency "reorder" 00:02:34.662 Message: lib/sched: Defining dependency "sched" 00:02:34.662 Message: lib/security: Defining dependency "security" 00:02:34.662 Message: lib/stack: Defining dependency "stack" 00:02:34.662 Has header "linux/userfaultfd.h" : YES 00:02:34.662 Message: lib/vhost: Defining dependency "vhost" 00:02:34.662 Message: lib/ipsec: Defining dependency "ipsec" 00:02:34.662 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.662 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:34.662 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:34.662 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.662 Message: lib/fib: Defining dependency "fib" 00:02:34.662 Message: lib/port: Defining dependency "port" 00:02:34.662 Message: lib/pdump: Defining dependency "pdump" 00:02:34.662 Message: lib/table: Defining dependency "table" 00:02:34.662 Message: lib/pipeline: Defining dependency "pipeline" 00:02:34.662 Message: lib/graph: Defining dependency "graph" 00:02:34.662 Message: lib/node: Defining dependency "node" 00:02:34.662 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.662 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.662 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.662 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.662 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:34.662 Compiler for C supports arguments -Wno-unused-value: YES 00:02:34.662 Compiler for C supports arguments -Wno-format: YES 00:02:34.662 Compiler for C supports arguments -Wno-format-security: YES 00:02:34.662 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:36.568 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:36.568 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:36.568 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:36.568 Fetching value of define "__AVX2__" : 1 (cached) 00:02:36.568 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:36.568 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.568 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:36.568 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:36.568 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:36.568 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:36.568 Configuring doxy-api.conf using configuration 00:02:36.568 Program sphinx-build found: NO 00:02:36.568 Configuring rte_build_config.h using configuration 00:02:36.568 Message: 00:02:36.568 ================= 00:02:36.568 Applications Enabled 00:02:36.568 ================= 00:02:36.568 00:02:36.568 apps: 00:02:36.568 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:36.568 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:36.568 test-security-perf, 00:02:36.568 00:02:36.568 Message: 00:02:36.568 ================= 00:02:36.568 Libraries Enabled 00:02:36.568 ================= 00:02:36.568 00:02:36.568 libs: 00:02:36.568 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:36.569 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:36.569 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:36.569 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm,
00:02:36.569 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:36.569 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:36.569 table, pipeline, graph, node,
00:02:36.569 
00:02:36.569 Message:
00:02:36.569 ===============
00:02:36.569 Drivers Enabled
00:02:36.569 ===============
00:02:36.569 
00:02:36.569 common:
00:02:36.569 
00:02:36.569 bus:
00:02:36.569 pci, vdev,
00:02:36.569 mempool:
00:02:36.569 ring,
00:02:36.569 dma:
00:02:36.569 
00:02:36.569 net:
00:02:36.569 i40e,
00:02:36.569 raw:
00:02:36.569 
00:02:36.569 crypto:
00:02:36.569 
00:02:36.569 compress:
00:02:36.569 
00:02:36.569 regex:
00:02:36.569 
00:02:36.569 vdpa:
00:02:36.569 
00:02:36.569 event:
00:02:36.569 
00:02:36.569 baseband:
00:02:36.569 
00:02:36.569 gpu:
00:02:36.569 
00:02:36.569 
00:02:36.569 Message:
00:02:36.569 =================
00:02:36.569 Content Skipped
00:02:36.569 =================
00:02:36.569 
00:02:36.569 apps:
00:02:36.569 
00:02:36.569 libs:
00:02:36.569 kni: explicitly disabled via build config (deprecated lib)
00:02:36.569 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:36.569 
00:02:36.569 drivers:
00:02:36.569 common/cpt: not in enabled drivers build config
00:02:36.569 common/dpaax: not in enabled drivers build config
00:02:36.569 common/iavf: not in enabled drivers build config
00:02:36.569 common/idpf: not in enabled drivers build config
00:02:36.569 common/mvep: not in enabled drivers build config
00:02:36.569 common/octeontx: not in enabled drivers build config
00:02:36.569 bus/auxiliary: not in enabled drivers build config
00:02:36.569 bus/dpaa: not in enabled drivers build config
00:02:36.569 bus/fslmc: not in enabled drivers build config
00:02:36.569 bus/ifpga: not in enabled drivers build config
00:02:36.569 bus/vmbus: not in enabled drivers build config
00:02:36.569 common/cnxk: not in enabled drivers build config
00:02:36.569 common/mlx5: not in enabled drivers build config
00:02:36.569 common/qat: not in enabled drivers build config
00:02:36.569 common/sfc_efx: not in enabled drivers build config
00:02:36.569 mempool/bucket: not in enabled drivers build config
00:02:36.569 mempool/cnxk: not in enabled drivers build config
00:02:36.569 mempool/dpaa: not in enabled drivers build config
00:02:36.569 mempool/dpaa2: not in enabled drivers build config
00:02:36.569 mempool/octeontx: not in enabled drivers build config
00:02:36.569 mempool/stack: not in enabled drivers build config
00:02:36.569 dma/cnxk: not in enabled drivers build config
00:02:36.569 dma/dpaa: not in enabled drivers build config
00:02:36.569 dma/dpaa2: not in enabled drivers build config
00:02:36.569 dma/hisilicon: not in enabled drivers build config
00:02:36.569 dma/idxd: not in enabled drivers build config
00:02:36.569 dma/ioat: not in enabled drivers build config
00:02:36.569 dma/skeleton: not in enabled drivers build config
00:02:36.569 net/af_packet: not in enabled drivers build config
00:02:36.569 net/af_xdp: not in enabled drivers build config
00:02:36.569 net/ark: not in enabled drivers build config
00:02:36.569 net/atlantic: not in enabled drivers build config
00:02:36.569 net/avp: not in enabled drivers build config
00:02:36.569 net/axgbe: not in enabled drivers build config
00:02:36.569 net/bnx2x: not in enabled drivers build config
00:02:36.569 net/bnxt: not in enabled drivers build config
00:02:36.569 net/bonding: not in enabled drivers build config
00:02:36.569 net/cnxk: not in enabled drivers build config
00:02:36.569 net/cxgbe: not in enabled drivers build config
00:02:36.569 net/dpaa: not in enabled drivers build config
00:02:36.569 net/dpaa2: not in enabled drivers build config
00:02:36.569 net/e1000: not in enabled drivers build config
00:02:36.569 net/ena: not in enabled drivers build config
00:02:36.569 net/enetc: not in enabled drivers build config
00:02:36.569 net/enetfec: not in enabled drivers build config
00:02:36.569 net/enic: not in enabled drivers build config
00:02:36.569 net/failsafe: not in enabled drivers build config
00:02:36.569 net/fm10k: not in enabled drivers build config
00:02:36.569 net/gve: not in enabled drivers build config
00:02:36.569 net/hinic: not in enabled drivers build config
00:02:36.569 net/hns3: not in enabled drivers build config
00:02:36.569 net/iavf: not in enabled drivers build config
00:02:36.569 net/ice: not in enabled drivers build config
00:02:36.569 net/idpf: not in enabled drivers build config
00:02:36.569 net/igc: not in enabled drivers build config
00:02:36.569 net/ionic: not in enabled drivers build config
00:02:36.569 net/ipn3ke: not in enabled drivers build config
00:02:36.569 net/ixgbe: not in enabled drivers build config
00:02:36.569 net/kni: not in enabled drivers build config
00:02:36.569 net/liquidio: not in enabled drivers build config
00:02:36.569 net/mana: not in enabled drivers build config
00:02:36.569 net/memif: not in enabled drivers build config
00:02:36.569 net/mlx4: not in enabled drivers build config
00:02:36.569 net/mlx5: not in enabled drivers build config
00:02:36.569 net/mvneta: not in enabled drivers build config
00:02:36.569 net/mvpp2: not in enabled drivers build config
00:02:36.569 net/netvsc: not in enabled drivers build config
00:02:36.569 net/nfb: not in enabled drivers build config
00:02:36.569 net/nfp: not in enabled drivers build config
00:02:36.569 net/ngbe: not in enabled drivers build config
00:02:36.569 net/null: not in enabled drivers build config
00:02:36.569 net/octeontx: not in enabled drivers build config
00:02:36.569 net/octeon_ep: not in enabled drivers build config
00:02:36.569 net/pcap: not in enabled drivers build config
00:02:36.569 net/pfe: not in enabled drivers build config
00:02:36.569 net/qede: not in enabled drivers build config
00:02:36.569 net/ring: not in enabled drivers build config
00:02:36.569 net/sfc: not in enabled drivers build config
00:02:36.569 net/softnic: not in enabled drivers build config
00:02:36.569 net/tap: not in enabled drivers build config
00:02:36.569 net/thunderx: not in enabled drivers build config
00:02:36.569 net/txgbe: not in enabled drivers build config
00:02:36.569 net/vdev_netvsc: not in enabled drivers build config
00:02:36.569 net/vhost: not in enabled drivers build config
00:02:36.569 net/virtio: not in enabled drivers build config
00:02:36.569 net/vmxnet3: not in enabled drivers build config
00:02:36.569 raw/cnxk_bphy: not in enabled drivers build config
00:02:36.569 raw/cnxk_gpio: not in enabled drivers build config
00:02:36.569 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:36.569 raw/ifpga: not in enabled drivers build config
00:02:36.569 raw/ntb: not in enabled drivers build config
00:02:36.569 raw/skeleton: not in enabled drivers build config
00:02:36.569 crypto/armv8: not in enabled drivers build config
00:02:36.569 crypto/bcmfs: not in enabled drivers build config
00:02:36.569 crypto/caam_jr: not in enabled drivers build config
00:02:36.569 crypto/ccp: not in enabled drivers build config
00:02:36.569 crypto/cnxk: not in enabled drivers build config
00:02:36.569 crypto/dpaa_sec: not in enabled drivers build config
00:02:36.569 crypto/dpaa2_sec: not in enabled drivers build config
00:02:36.569 crypto/ipsec_mb: not in enabled drivers build config
00:02:36.569 crypto/mlx5: not in enabled drivers build config
00:02:36.569 crypto/mvsam: not in enabled drivers build config
00:02:36.569 crypto/nitrox: not in enabled drivers build config
00:02:36.569 crypto/null: not in enabled drivers build config
00:02:36.569 crypto/octeontx: not in enabled drivers build config
00:02:36.569 crypto/openssl: not in enabled drivers build config
00:02:36.569 crypto/scheduler: not in enabled drivers build config
00:02:36.569 crypto/uadk: not in enabled drivers build config
00:02:36.569 crypto/virtio: not in enabled drivers build config
00:02:36.569 compress/isal: not in enabled drivers build config
00:02:36.569 compress/mlx5: not in enabled drivers build config
00:02:36.569 compress/octeontx: not in enabled drivers build config
00:02:36.569 compress/zlib: not in enabled drivers build config
00:02:36.569 regex/mlx5: not in enabled drivers build config
00:02:36.569 regex/cn9k: not in enabled drivers build config
00:02:36.569 vdpa/ifc: not in enabled drivers build config
00:02:36.569 vdpa/mlx5: not in enabled drivers build config
00:02:36.569 vdpa/sfc: not in enabled drivers build config
00:02:36.569 event/cnxk: not in enabled drivers build config
00:02:36.569 event/dlb2: not in enabled drivers build config
00:02:36.569 event/dpaa: not in enabled drivers build config
00:02:36.569 event/dpaa2: not in enabled drivers build config
00:02:36.569 event/dsw: not in enabled drivers build config
00:02:36.569 event/opdl: not in enabled drivers build config
00:02:36.569 event/skeleton: not in enabled drivers build config
00:02:36.569 event/sw: not in enabled drivers build config
00:02:36.569 event/octeontx: not in enabled drivers build config
00:02:36.569 baseband/acc: not in enabled drivers build config
00:02:36.569 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:36.569 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:36.569 baseband/la12xx: not in enabled drivers build config
00:02:36.569 baseband/null: not in enabled drivers build config
00:02:36.569 baseband/turbo_sw: not in enabled drivers build config
00:02:36.569 gpu/cuda: not in enabled drivers build config
00:02:36.569 
00:02:36.569 
00:02:36.569 Build targets in project: 314
00:02:36.569 
00:02:36.569 DPDK 22.11.4
00:02:36.569 
00:02:36.569 User defined options
00:02:36.569 libdir : lib
00:02:36.569 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:36.569 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:36.569 c_link_args :
00:02:36.569 enable_docs : false
00:02:36.570 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:36.570 enable_kmods : false
00:02:36.570 machine : native
00:02:36.570 tests : false
00:02:36.570 
00:02:36.570 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:36.570 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
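
For reference, the configuration summarized above could be reproduced outside the CI wrapper with the non-deprecated `meson setup` form. This is only a sketch built from the "User defined options" block; the autobuild script's literal invocation is not shown in this log:

    # Sketch: reconfigure DPDK 22.11.4 with the options recorded above.
    # `build-tmp` matches the build directory used by ninja below.
    meson setup build-tmp \
        -Dprefix=/home/vagrant/spdk_repo/dpdk/build \
        -Dlibdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10

Because enable_drivers is restricted to the PCI/vdev buses, the ring mempool, and the i40e PMD, every other driver lands under "Content Skipped"; invoking `meson setup` explicitly would also avoid the deprecation warning printed above.
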
00:02:36.570 06:20:28 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:36.570 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:36.570 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:36.570 [2/743] Generating lib/rte_kvargs_def with a custom command 00:02:36.570 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:36.570 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:36.570 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.570 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.570 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.570 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.570 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.570 [10/743] Linking static target lib/librte_kvargs.a 00:02:36.570 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.570 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.570 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.570 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.828 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:36.828 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.828 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:36.828 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.828 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:36.828 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.828 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:36.828 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.828 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.828 [24/743] Linking target lib/librte_kvargs.so.23.0 00:02:36.828 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.086 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.087 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.087 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.087 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.087 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.087 [31/743] Linking static target lib/librte_telemetry.a 00:02:37.087 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.087 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.087 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.087 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.087 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.087 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.345 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:37.345 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.345 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.345 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.346 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.346 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.346 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.346 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.346 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.346 [47/743] Linking target lib/librte_telemetry.so.23.0 00:02:37.604 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.604 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.604 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:37.604 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.604 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.604 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.604 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.604 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.604 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.604 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.604 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.604 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.604 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.604 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.604 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.863 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.863 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.863 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:37.863 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.863 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.863 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.863 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.863 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.863 [71/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.863 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.863 [73/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.863 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.863 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.863 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.122 [77/743] Generating lib/rte_eal_def with a custom command 00:02:38.122 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:38.122 [79/743] Generating lib/rte_ring_def with a custom command 00:02:38.122 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.122 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.122 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:38.122 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:38.122 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.122 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:38.122 [86/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.122 [87/743] Linking static target lib/librte_ring.a 00:02:38.122 [88/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.122 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.122 [90/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.122 [91/743] Generating lib/rte_mempool_def with a custom command 00:02:38.122 [92/743] Generating lib/rte_mempool_mingw with a custom command 00:02:38.382 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.382 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.640 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.640 [96/743] Linking static target lib/librte_eal.a 00:02:38.640 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.640 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:38.640 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.640 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:38.899 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.899 [102/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.899 [103/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.899 [104/743] Linking static target lib/librte_rcu.a 00:02:38.899 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.158 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.158 [107/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.158 [108/743] Linking static target lib/librte_mempool.a 00:02:39.158 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.158 [110/743] Generating lib/rte_net_def with a custom command 00:02:39.158 [111/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.158 [112/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.158 [113/743] Generating lib/rte_net_mingw with a custom command 00:02:39.158 [114/743] Generating lib/rte_meter_def with a custom command 00:02:39.158 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:39.417 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.417 [117/743] Linking static target lib/librte_meter.a 00:02:39.417 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.417 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.417 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.417 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.676 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:39.676 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.676 [124/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.677 [125/743] Linking static target lib/librte_mbuf.a 00:02:39.677 [126/743] Linking static target lib/librte_net.a 00:02:39.936 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.936 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.936 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.936 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.936 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:40.195 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.195 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.195 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.454 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.713 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.713 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:40.713 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.713 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:40.713 [140/743] Generating lib/rte_pci_def with a custom command 00:02:40.713 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:40.713 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.713 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.713 [144/743] Linking static target lib/librte_pci.a 00:02:40.713 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.973 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.973 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.973 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.973 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.973 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.973 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.973 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.973 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.973 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.973 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.973 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.973 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:40.973 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:41.232 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.232 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:41.232 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:41.232 [162/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:41.232 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.232 [164/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.232 [165/743] Generating lib/rte_hash_def with a custom command 00:02:41.232 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:41.491 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.491 [168/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.491 [169/743] Generating lib/rte_timer_def with a custom command 00:02:41.491 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:41.491 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.491 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.491 [173/743] Linking static target lib/librte_cmdline.a 00:02:41.749 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:41.749 [175/743] Linking static target lib/librte_metrics.a 00:02:41.749 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:41.749 [177/743] Linking static target lib/librte_timer.a 00:02:42.008 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.008 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.267 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.267 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:42.267 [182/743] Linking static target lib/librte_ethdev.a 00:02:42.267 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.267 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.836 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:42.836 [186/743] Generating lib/rte_acl_def with a custom command 00:02:42.836 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:42.836 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:42.836 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:42.836 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:42.836 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:42.836 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:42.836 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:43.095 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:43.355 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:43.355 [196/743] Linking static target lib/librte_bitratestats.a 00:02:43.355 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:43.614 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.614 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:43.614 [200/743] Linking static target lib/librte_bbdev.a 00:02:43.614 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:43.873 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.873 [203/743] Linking static target lib/librte_hash.a 00:02:44.132 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:44.132 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:44.132 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:02:44.132 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.132 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:44.391 [209/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.391 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.650 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:44.650 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:44.650 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:44.650 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:44.650 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:44.650 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:44.909 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:44.909 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:44.909 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:44.909 [220/743] Linking static target lib/librte_acl.a 00:02:44.909 [221/743] Linking static target lib/librte_cfgfile.a 00:02:44.909 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:44.909 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:45.168 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:45.168 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.168 [226/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.168 [227/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.168 [228/743] Generating lib/rte_cryptodev_def with a custom command 00:02:45.168 [229/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.168 [230/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:45.426 [231/743] Linking target lib/librte_eal.so.23.0 00:02:45.426 [232/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.427 [233/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.427 [234/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:45.427 [235/743] Linking target lib/librte_ring.so.23.0 00:02:45.685 [236/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:45.685 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:45.685 [238/743] Linking target lib/librte_rcu.so.23.0 00:02:45.685 [239/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.685 [240/743] Linking target lib/librte_mempool.so.23.0 00:02:45.685 [241/743] Linking target lib/librte_meter.so.23.0 00:02:45.685 [242/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.685 [243/743] Linking target lib/librte_pci.so.23.0 00:02:45.685 [244/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:45.686 [245/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:45.686 [246/743] Linking target lib/librte_timer.so.23.0 00:02:46.012 [247/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:46.012 [248/743] Linking target lib/librte_mbuf.so.23.0 00:02:46.012 [249/743] 
Linking target lib/librte_acl.so.23.0 00:02:46.012 [250/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:46.012 [251/743] Linking static target lib/librte_bpf.a 00:02:46.012 [252/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:46.012 [253/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:46.012 [254/743] Linking target lib/librte_cfgfile.so.23.0 00:02:46.012 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:46.012 [256/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.012 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.012 [258/743] Linking target lib/librte_net.so.23.0 00:02:46.012 [259/743] Linking static target lib/librte_compressdev.a 00:02:46.012 [260/743] Linking target lib/librte_bbdev.so.23.0 00:02:46.012 [261/743] Generating lib/rte_distributor_def with a custom command 00:02:46.012 [262/743] Generating lib/rte_distributor_mingw with a custom command 00:02:46.012 [263/743] Generating lib/rte_efd_def with a custom command 00:02:46.012 [264/743] Generating lib/rte_efd_mingw with a custom command 00:02:46.293 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:46.293 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.293 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:46.293 [268/743] Linking target lib/librte_hash.so.23.0 00:02:46.293 [269/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.293 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.293 [271/743] Linking static target lib/librte_distributor.a 00:02:46.293 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:46.553 [273/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.553 [274/743] Linking target lib/librte_distributor.so.23.0 00:02:46.812 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:46.812 [276/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.812 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:46.812 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:46.812 [279/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.812 [280/743] Linking target lib/librte_compressdev.so.23.0 00:02:46.812 [281/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:46.812 [282/743] Generating lib/rte_eventdev_def with a custom command 00:02:47.071 [283/743] Linking target lib/librte_metrics.so.23.0 00:02:47.071 [284/743] Linking target lib/librte_bpf.so.23.0 00:02:47.071 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:47.071 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:47.071 [287/743] Linking target lib/librte_bitratestats.so.23.0 00:02:47.071 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:47.071 [289/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:47.071 [290/743] Generating lib/rte_gpudev_def with a 
custom command 00:02:47.071 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:47.639 [292/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.639 [293/743] Linking static target lib/librte_cryptodev.a 00:02:47.639 [294/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.639 [295/743] Linking static target lib/librte_efd.a 00:02:47.639 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:47.639 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.639 [298/743] Linking target lib/librte_efd.so.23.0 00:02:47.898 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:47.898 [300/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:47.898 [301/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:47.898 [302/743] Generating lib/rte_gro_def with a custom command 00:02:47.898 [303/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:47.898 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:47.898 [305/743] Linking static target lib/librte_gpudev.a 00:02:48.157 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:48.417 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:48.417 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:48.681 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:48.681 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:48.681 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:48.681 [312/743] Generating lib/rte_gso_def with a custom command 00:02:48.681 [313/743] Linking static target lib/librte_gro.a 00:02:48.681 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:48.681 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:48.681 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.940 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:48.940 [318/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.940 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:48.940 [320/743] Linking target lib/librte_gro.so.23.0 00:02:48.940 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:49.200 [322/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:49.200 [323/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:49.200 [324/743] Linking static target lib/librte_eventdev.a 00:02:49.200 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:02:49.200 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:49.200 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:49.200 [328/743] Linking static target lib/librte_gso.a 00:02:49.200 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:49.200 [330/743] Linking static target lib/librte_jobstats.a 00:02:49.458 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:49.458 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:49.458 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.458 [334/743] Linking target 
lib/librte_gso.so.23.0 00:02:49.458 [335/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.458 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:49.458 [337/743] Linking target lib/librte_cryptodev.so.23.0 00:02:49.458 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:49.458 [339/743] Generating lib/rte_latencystats_def with a custom command 00:02:49.458 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:49.458 [341/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:49.717 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:49.717 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:49.717 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:49.717 [345/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:49.717 [346/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.717 [347/743] Linking target lib/librte_jobstats.so.23.0 00:02:49.717 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:49.976 [349/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:49.976 [350/743] Linking static target lib/librte_ip_frag.a 00:02:50.235 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.235 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:50.235 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:50.235 [354/743] Linking static target lib/librte_latencystats.a 00:02:50.235 [355/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:50.235 [356/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:50.235 [357/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:50.235 [358/743] Generating lib/rte_member_def with a custom command 00:02:50.235 [359/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:50.235 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:50.493 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:50.493 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:50.493 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.493 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:50.493 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:50.493 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.493 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.493 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:50.493 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.752 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:50.752 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:50.752 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:50.752 [373/743] Linking static target lib/librte_lpm.a 00:02:51.010 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:51.011 [375/743] Generating 
lib/rte_power_def with a custom command 00:02:51.011 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:51.011 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.011 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:51.011 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.270 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:51.270 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:51.270 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:51.270 [383/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.270 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:51.270 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:51.270 [386/743] Linking target lib/librte_lpm.so.23.0 00:02:51.270 [387/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:51.270 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:51.270 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:51.270 [390/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:51.270 [391/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:51.270 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.270 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:51.270 [394/743] Linking static target lib/librte_pcapng.a 00:02:51.529 [395/743] Generating lib/rte_rib_def with a custom command 00:02:51.529 [396/743] Generating lib/rte_rib_mingw with a custom command 00:02:51.529 [397/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:51.529 [398/743] Linking static target lib/librte_rawdev.a 00:02:51.529 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:51.529 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:51.529 [401/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.529 [402/743] Linking static target lib/librte_power.a 00:02:51.529 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.788 [404/743] Linking static target lib/librte_dmadev.a 00:02:51.788 [405/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.788 [406/743] Linking target lib/librte_pcapng.so.23.0 00:02:51.788 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:51.788 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.788 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:52.047 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:52.047 [411/743] Linking static target lib/librte_regexdev.a 00:02:52.047 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:52.047 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:52.047 [414/743] Linking static target lib/librte_member.a 00:02:52.047 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:52.047 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:52.047 [417/743] Generating lib/rte_sched_def with a custom command 00:02:52.047 [418/743] Generating 
lib/rte_sched_mingw with a custom command 00:02:52.047 [419/743] Generating lib/rte_security_def with a custom command 00:02:52.306 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:52.306 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:52.306 [422/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:52.306 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.306 [424/743] Linking static target lib/librte_reorder.a 00:02:52.306 [425/743] Linking target lib/librte_dmadev.so.23.0 00:02:52.306 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:52.306 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:52.306 [428/743] Generating lib/rte_stack_def with a custom command 00:02:52.306 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:52.306 [430/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.306 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:52.306 [432/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:52.306 [433/743] Linking static target lib/librte_stack.a 00:02:52.306 [434/743] Linking target lib/librte_member.so.23.0 00:02:52.565 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.565 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.565 [437/743] Linking target lib/librte_reorder.so.23.0 00:02:52.565 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:52.565 [439/743] Linking static target lib/librte_rib.a 00:02:52.565 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.565 [441/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.565 [442/743] Linking target lib/librte_power.so.23.0 00:02:52.565 [443/743] Linking target lib/librte_stack.so.23.0 00:02:52.565 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.824 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:52.824 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:52.824 [447/743] Linking static target lib/librte_security.a 00:02:52.824 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.083 [449/743] Linking target lib/librte_rib.so.23.0 00:02:53.083 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:53.083 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:53.083 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:53.083 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.342 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.342 [455/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:53.342 [456/743] Linking static target lib/librte_sched.a 00:02:53.342 [457/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.342 [458/743] Linking target lib/librte_security.so.23.0 00:02:53.342 [459/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.601 [460/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:53.601 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.861 [462/743] Linking target lib/librte_sched.so.23.0 00:02:53.861 [463/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:53.861 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.861 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:53.861 [466/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.120 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:54.120 [468/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.120 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.120 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:54.378 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:54.637 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:54.637 [473/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:54.637 [474/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:54.637 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:54.637 [476/743] Generating lib/rte_fib_def with a custom command 00:02:54.637 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:54.637 [478/743] Generating lib/rte_fib_mingw with a custom command 00:02:54.896 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:54.896 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:54.896 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:54.896 [482/743] Linking static target lib/librte_ipsec.a 00:02:55.459 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.459 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:55.459 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:55.459 [486/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:55.459 [487/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:55.459 [488/743] Linking static target lib/librte_fib.a 00:02:55.717 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:55.717 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:55.717 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:55.976 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.976 [493/743] Linking target lib/librte_fib.so.23.0 00:02:55.976 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:56.542 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:56.542 [496/743] Generating lib/rte_port_def with a custom command 00:02:56.542 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:56.542 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:56.542 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:56.542 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:56.542 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:56.800 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:56.800 [503/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:56.800 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:57.058 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:57.058 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:57.058 [507/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:57.058 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:57.058 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:57.058 [510/743] Linking static target lib/librte_port.a 00:02:57.626 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:57.626 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:57.626 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:57.626 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.626 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:57.626 [516/743] Linking target lib/librte_port.so.23.0 00:02:57.885 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:57.885 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:57.885 [519/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:57.885 [520/743] Linking static target lib/librte_pdump.a 00:02:58.144 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.144 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:58.403 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:58.403 [524/743] Generating lib/rte_table_def with a custom command 00:02:58.403 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:58.403 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:58.403 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:58.662 [528/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.662 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:58.662 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:58.662 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:58.921 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:58.921 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:58.921 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.921 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.921 [536/743] Linking static target lib/librte_table.a 00:02:59.180 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:59.438 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:59.696 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:59.696 [540/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:59.697 [541/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.697 [542/743] Linking target lib/librte_table.so.23.0 00:02:59.697 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:59.697 [544/743] Generating lib/rte_graph_def with a custom command 00:02:59.955 [545/743] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:59.955 [546/743] Generating lib/rte_graph_mingw with a custom command 00:02:59.955 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.955 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:00.214 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:00.472 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:00.472 [551/743] Linking static target lib/librte_graph.a 00:03:00.472 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:00.731 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:00.731 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:00.731 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:00.990 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:00.990 [557/743] Generating lib/rte_node_def with a custom command 00:03:00.990 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:00.990 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.248 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.248 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.248 [562/743] Linking target lib/librte_graph.so.23.0 00:03:01.248 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:01.248 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:01.248 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.248 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:01.248 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:01.507 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:01.507 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.507 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.507 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:01.507 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:01.507 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.507 [574/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:01.507 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:01.507 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:01.507 [577/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:01.507 [578/743] Linking static target lib/librte_node.a 00:03:01.764 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.764 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.764 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.764 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.764 [583/743] Linking target lib/librte_node.so.23.0 00:03:02.022 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.022 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.022 [586/743] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.022 [587/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.022 [588/743] Linking static target drivers/librte_bus_vdev.a 00:03:02.280 [589/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.280 [590/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.280 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.280 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.280 [593/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:02.280 [594/743] Linking static target drivers/librte_bus_pci.a 00:03:02.280 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.538 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:02.538 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:02.538 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.538 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:02.796 [600/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.796 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:02.796 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:02.796 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.796 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.054 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.054 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.054 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:03.054 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.054 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:03.054 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.312 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:03.879 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.879 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.879 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:04.446 [615/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:04.446 [616/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:04.446 [617/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:04.705 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:04.984 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:04.984 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:05.254 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:05.254 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:05.254 [623/743] Generating drivers/rte_net_i40e_mingw with a custom 
command 00:03:05.254 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:05.254 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.190 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:06.449 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:06.449 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:06.449 [629/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:06.449 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:06.708 [631/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:06.708 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:06.708 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:06.968 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:06.968 [635/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.968 [636/743] Linking static target lib/librte_vhost.a 00:03:06.968 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:06.968 [638/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:07.535 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:07.535 [640/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:07.535 [641/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:07.535 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:07.794 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:07.794 [644/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:07.794 [645/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:07.794 [646/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.053 [647/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.053 [648/743] Linking static target drivers/librte_net_i40e.a 00:03:08.053 [649/743] Linking target lib/librte_vhost.so.23.0 00:03:08.053 [650/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.053 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:08.053 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:08.312 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:08.571 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:08.571 [655/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.571 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:08.571 [657/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:08.571 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:08.830 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:09.088 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:09.088 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:09.347 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:09.347 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:09.347 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:09.347 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:09.347 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:09.347 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:09.347 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:09.606 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:09.865 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:10.124 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:10.124 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:10.124 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:10.692 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:10.692 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:10.952 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:10.952 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:11.210 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:11.210 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:11.469 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:11.469 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:11.469 [682/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:11.469 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:11.728 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:11.728 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:11.987 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:11.987 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:11.987 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:12.246 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:12.246 [690/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:12.246 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:12.246 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:12.246 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:12.505 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:12.505 [695/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:12.505 
[696/743] Linking static target lib/librte_pipeline.a 00:03:13.073 [697/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:13.073 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:13.073 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:13.073 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:13.330 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:13.330 [702/743] Linking target app/dpdk-pdump 00:03:13.330 [703/743] Linking target app/dpdk-dumpcap 00:03:13.589 [704/743] Linking target app/dpdk-proc-info 00:03:13.589 [705/743] Linking target app/dpdk-test-acl 00:03:13.589 [706/743] Linking target app/dpdk-test-bbdev 00:03:13.848 [707/743] Linking target app/dpdk-test-compress-perf 00:03:13.848 [708/743] Linking target app/dpdk-test-cmdline 00:03:13.848 [709/743] Linking target app/dpdk-test-crypto-perf 00:03:13.848 [710/743] Linking target app/dpdk-test-eventdev 00:03:14.119 [711/743] Linking target app/dpdk-test-fib 00:03:14.119 [712/743] Linking target app/dpdk-test-gpudev 00:03:14.119 [713/743] Linking target app/dpdk-test-flow-perf 00:03:14.119 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:14.405 [715/743] Linking target app/dpdk-test-pipeline 00:03:14.405 [716/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:14.972 [717/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:14.972 [718/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:14.972 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:14.972 [720/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:14.972 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:14.972 [722/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.230 [723/743] Linking target lib/librte_pipeline.so.23.0 00:03:15.230 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:15.489 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:15.489 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:15.748 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:15.748 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:15.748 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:16.007 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:16.007 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:16.266 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:16.266 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:16.266 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:16.527 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:16.527 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:16.527 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:16.786 [738/743] Linking target app/dpdk-test-sad 00:03:16.786 [739/743] Linking target app/dpdk-test-regex 00:03:17.045 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:17.045 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:17.303 [742/743] 
Linking target app/dpdk-test-security-perf 00:03:17.561 [743/743] Linking target app/dpdk-testpmd 00:03:17.561 06:21:10 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:17.561 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:17.561 [0/1] Installing files. 00:03:17.822 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:17.823 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.823 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.824 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.825 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.825 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.826 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.827 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:17.827 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.827 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:18.087 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:18.087 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:18.087 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.087 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:18.087 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.087 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.349 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.350 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.351 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:18.352 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:18.352 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:18.352 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:18.352 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:18.352 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:18.352 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:18.352 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:18.352 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:18.352 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:18.352 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:18.352 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:18.352 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:18.352 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:18.352 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:18.352 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:18.352 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:18.352 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:18.352 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:18.352 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:18.352 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:18.353 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:18.353 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:18.353 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:18.353 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:18.353 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:18.353 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:18.353 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:18.353 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:18.353 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:18.353 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:18.353 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:18.353 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:18.353 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:18.353 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:18.353 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:18.353 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:18.353 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:18.353 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:18.353 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:18.353 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:18.353 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:18.353 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:18.353 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:18.353 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:18.353 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:18.353 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:18.353 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:18.353 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:18.353 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:18.353 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:18.353 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:18.353 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:18.353 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:18.353 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:18.353 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:18.353 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:18.353 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:18.353 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:18.353 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:18.353 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:18.353 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:18.353 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:18.353 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:18.353 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:18.353 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:18.353 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:18.353 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:18.353 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:18.353 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:18.353 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:18.353 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:18.353 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:18.353 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:18.353 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:18.353 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:18.353 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:18.353 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:18.353 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:18.353 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:18.353 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
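[Aside: the "Installing symlink pointing to ..." records above follow the conventional versioned shared-library layout. A minimal sketch of the equivalent commands, using librte_kvargs as the example, with names and version numbers taken from this log:

    # librte_kvargs.so.23.0 is the real file installed by the build.
    ln -s librte_kvargs.so.23.0 librte_kvargs.so.23   # soname link, resolved by the dynamic loader at run time
    ln -s librte_kvargs.so.23   librte_kvargs.so      # linker name, resolved at build time

The two-link chain lets applications link against the unversioned name while the runtime loader pins the ABI through the soname; the PMD entries relocated into dpdk/pmds-23.0/ get the same treatment via symlink-drivers-solibs.sh.]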
00:03:18.353 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:18.353 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:18.353 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:18.353 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:18.353 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:18.353 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:18.353 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:18.353 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:18.353 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:18.353 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:18.353 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:18.353 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:18.353 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:18.353 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:18.353 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:18.353 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:18.353 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:18.353 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:18.353 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:18.353 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:18.353 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:18.353 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:18.353 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:18.353 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:18.353 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:18.353 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:18.353 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:18.353 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:18.353 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:18.353 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:18.353 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:18.353 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:18.353 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:18.353 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:18.353 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:18.353 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:18.353 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:18.353 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:18.353 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:18.353 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:18.353 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:18.353 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:18.353 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:18.353 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:18.353 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:18.353 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:18.353 06:21:10 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:18.353 06:21:10 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:18.353 06:21:10 -- common/autobuild_common.sh@203 -- $ cat 00:03:18.353 06:21:10 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:18.354 00:03:18.354 real 0m49.115s 00:03:18.354 user 5m39.858s 00:03:18.354 sys 1m0.366s 00:03:18.354 06:21:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:18.354 06:21:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.354 ************************************ 00:03:18.354 END TEST build_native_dpdk 00:03:18.354 ************************************ 00:03:18.614 06:21:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:18.614 06:21:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:18.614 06:21:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:18.614 06:21:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:18.614 06:21:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:18.614 06:21:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:18.614 06:21:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:18.614 06:21:11 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:18.614 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:18.614 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:18.614 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:18.873 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:19.132 Using 'verbs' RDMA provider 00:03:34.576 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:46.784 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:46.784 go version go1.21.1 linux/amd64 00:03:46.784 Creating mk/config.mk...done. 00:03:46.784 Creating mk/cc.flags.mk...done. 00:03:46.784 Type 'make' to build. 00:03:46.784 06:21:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:46.784 06:21:39 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:46.784 06:21:39 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:46.784 06:21:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.784 ************************************ 00:03:46.784 START TEST make 00:03:46.784 ************************************ 00:03:46.784 06:21:39 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:46.784 make[1]: Nothing to be done for 'all'. 00:04:13.347 CC lib/ut/ut.o 00:04:13.347 CC lib/ut_mock/mock.o 00:04:13.347 CC lib/log/log_flags.o 00:04:13.347 CC lib/log/log.o 00:04:13.347 CC lib/log/log_deprecated.o 00:04:13.347 LIB libspdk_ut_mock.a 00:04:13.347 SO libspdk_ut_mock.so.5.0 00:04:13.347 LIB libspdk_ut.a 00:04:13.347 LIB libspdk_log.a 00:04:13.347 SO libspdk_ut.so.1.0 00:04:13.347 SO libspdk_log.so.6.1 00:04:13.347 SYMLINK libspdk_ut_mock.so 00:04:13.347 SYMLINK libspdk_ut.so 00:04:13.347 SYMLINK libspdk_log.so 00:04:13.347 CC lib/ioat/ioat.o 00:04:13.347 CC lib/dma/dma.o 00:04:13.347 CC lib/util/base64.o 00:04:13.347 CC lib/util/bit_array.o 00:04:13.347 CXX lib/trace_parser/trace.o 00:04:13.347 CC lib/util/cpuset.o 00:04:13.347 CC lib/util/crc16.o 00:04:13.347 CC lib/util/crc32c.o 00:04:13.347 CC lib/util/crc32.o 00:04:13.347 CC lib/vfio_user/host/vfio_user_pci.o 00:04:13.347 CC lib/util/crc32_ieee.o 00:04:13.347 CC lib/vfio_user/host/vfio_user.o 00:04:13.347 CC lib/util/crc64.o 00:04:13.347 CC lib/util/dif.o 00:04:13.347 LIB libspdk_dma.a 00:04:13.347 CC lib/util/fd.o 00:04:13.347 SO libspdk_dma.so.3.0 00:04:13.347 CC lib/util/file.o 00:04:13.347 SYMLINK libspdk_dma.so 00:04:13.347 LIB libspdk_ioat.a 00:04:13.347 CC lib/util/hexlify.o 00:04:13.347 CC lib/util/iov.o 00:04:13.347 CC lib/util/math.o 00:04:13.347 CC lib/util/pipe.o 00:04:13.347 SO libspdk_ioat.so.6.0 00:04:13.347 CC lib/util/strerror_tls.o 00:04:13.347 CC lib/util/string.o 00:04:13.347 LIB libspdk_vfio_user.a 00:04:13.347 SYMLINK libspdk_ioat.so 00:04:13.347 CC lib/util/uuid.o 00:04:13.347 SO libspdk_vfio_user.so.4.0 00:04:13.347 CC lib/util/fd_group.o 00:04:13.347 CC lib/util/xor.o 00:04:13.347 CC lib/util/zipf.o 00:04:13.347 SYMLINK libspdk_vfio_user.so 00:04:13.347 LIB libspdk_util.a 00:04:13.347 SO libspdk_util.so.8.0 00:04:13.347 SYMLINK libspdk_util.so 00:04:13.347 LIB libspdk_trace_parser.a 00:04:13.347 CC lib/conf/conf.o 00:04:13.347 CC lib/env_dpdk/env.o 
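[Aside: the configure invocation recorded above is the step that links SPDK against the DPDK tree installed earlier in this log. A minimal sketch for reproducing it outside CI, with paths taken from this log; only --with-dpdk is essential here, the other flags shown in the log enable optional features:

    # Build SPDK against the externally built DPDK (sketch; paths from this log).
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
                --enable-debug --enable-werror
    make -j"$(nproc)"

The "Using ... pkgconfig for additional libs" and "DPDK libraries/includes" lines that follow the configure call confirm the external DPDK was picked up; the CC/CXX/LIB/SYMLINK records after "START TEST make" are the resulting compile and link steps.]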
00:04:13.347 CC lib/vmd/vmd.o 00:04:13.347 CC lib/env_dpdk/memory.o 00:04:13.347 CC lib/idxd/idxd.o 00:04:13.347 CC lib/vmd/led.o 00:04:13.347 CC lib/env_dpdk/pci.o 00:04:13.347 CC lib/rdma/common.o 00:04:13.347 CC lib/json/json_parse.o 00:04:13.347 SO libspdk_trace_parser.so.4.0 00:04:13.347 SYMLINK libspdk_trace_parser.so 00:04:13.347 CC lib/env_dpdk/init.o 00:04:13.347 CC lib/env_dpdk/threads.o 00:04:13.605 LIB libspdk_conf.a 00:04:13.605 CC lib/json/json_util.o 00:04:13.605 SO libspdk_conf.so.5.0 00:04:13.605 CC lib/rdma/rdma_verbs.o 00:04:13.605 CC lib/env_dpdk/pci_ioat.o 00:04:13.605 SYMLINK libspdk_conf.so 00:04:13.605 CC lib/env_dpdk/pci_virtio.o 00:04:13.605 CC lib/env_dpdk/pci_vmd.o 00:04:13.605 CC lib/idxd/idxd_user.o 00:04:13.605 CC lib/env_dpdk/pci_idxd.o 00:04:13.863 CC lib/env_dpdk/pci_event.o 00:04:13.863 CC lib/json/json_write.o 00:04:13.863 CC lib/env_dpdk/sigbus_handler.o 00:04:13.863 LIB libspdk_rdma.a 00:04:13.863 CC lib/env_dpdk/pci_dpdk.o 00:04:13.863 SO libspdk_rdma.so.5.0 00:04:13.863 CC lib/idxd/idxd_kernel.o 00:04:13.863 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.863 SYMLINK libspdk_rdma.so 00:04:13.863 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.863 LIB libspdk_vmd.a 00:04:13.863 SO libspdk_vmd.so.5.0 00:04:14.120 LIB libspdk_idxd.a 00:04:14.120 SYMLINK libspdk_vmd.so 00:04:14.120 SO libspdk_idxd.so.11.0 00:04:14.120 LIB libspdk_json.a 00:04:14.120 SO libspdk_json.so.5.1 00:04:14.120 SYMLINK libspdk_idxd.so 00:04:14.120 SYMLINK libspdk_json.so 00:04:14.378 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:14.378 CC lib/jsonrpc/jsonrpc_server.o 00:04:14.378 CC lib/jsonrpc/jsonrpc_client.o 00:04:14.378 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:14.636 LIB libspdk_jsonrpc.a 00:04:14.636 SO libspdk_jsonrpc.so.5.1 00:04:14.636 SYMLINK libspdk_jsonrpc.so 00:04:14.895 LIB libspdk_env_dpdk.a 00:04:14.895 SO libspdk_env_dpdk.so.13.0 00:04:14.895 CC lib/rpc/rpc.o 00:04:14.895 SYMLINK libspdk_env_dpdk.so 00:04:15.152 LIB libspdk_rpc.a 00:04:15.152 SO libspdk_rpc.so.5.0 00:04:15.152 SYMLINK libspdk_rpc.so 00:04:15.410 CC lib/trace/trace_flags.o 00:04:15.410 CC lib/trace/trace.o 00:04:15.410 CC lib/trace/trace_rpc.o 00:04:15.410 CC lib/notify/notify.o 00:04:15.410 CC lib/notify/notify_rpc.o 00:04:15.410 CC lib/sock/sock_rpc.o 00:04:15.410 CC lib/sock/sock.o 00:04:15.669 LIB libspdk_notify.a 00:04:15.669 LIB libspdk_trace.a 00:04:15.669 SO libspdk_notify.so.5.0 00:04:15.669 SO libspdk_trace.so.9.0 00:04:15.669 SYMLINK libspdk_notify.so 00:04:15.669 SYMLINK libspdk_trace.so 00:04:15.927 LIB libspdk_sock.a 00:04:15.927 SO libspdk_sock.so.8.0 00:04:15.927 CC lib/thread/thread.o 00:04:15.927 CC lib/thread/iobuf.o 00:04:15.927 SYMLINK libspdk_sock.so 00:04:16.185 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:16.185 CC lib/nvme/nvme_ctrlr.o 00:04:16.185 CC lib/nvme/nvme_fabric.o 00:04:16.185 CC lib/nvme/nvme_ns_cmd.o 00:04:16.185 CC lib/nvme/nvme_ns.o 00:04:16.185 CC lib/nvme/nvme_pcie_common.o 00:04:16.185 CC lib/nvme/nvme_pcie.o 00:04:16.185 CC lib/nvme/nvme_qpair.o 00:04:16.185 CC lib/nvme/nvme.o 00:04:16.752 CC lib/nvme/nvme_quirks.o 00:04:16.752 CC lib/nvme/nvme_transport.o 00:04:17.010 CC lib/nvme/nvme_discovery.o 00:04:17.010 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:17.010 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:17.010 CC lib/nvme/nvme_tcp.o 00:04:17.010 CC lib/nvme/nvme_opal.o 00:04:17.010 CC lib/nvme/nvme_io_msg.o 00:04:17.597 CC lib/nvme/nvme_poll_group.o 00:04:17.597 LIB libspdk_thread.a 00:04:17.597 CC lib/nvme/nvme_zns.o 00:04:17.597 SO libspdk_thread.so.9.0 00:04:17.597 SYMLINK 
libspdk_thread.so 00:04:17.597 CC lib/nvme/nvme_cuse.o 00:04:17.597 CC lib/nvme/nvme_vfio_user.o 00:04:17.597 CC lib/nvme/nvme_rdma.o 00:04:17.900 CC lib/blob/blobstore.o 00:04:17.900 CC lib/accel/accel.o 00:04:17.900 CC lib/init/json_config.o 00:04:18.158 CC lib/init/subsystem.o 00:04:18.158 CC lib/blob/request.o 00:04:18.158 CC lib/blob/zeroes.o 00:04:18.158 CC lib/init/subsystem_rpc.o 00:04:18.158 CC lib/init/rpc.o 00:04:18.158 CC lib/blob/blob_bs_dev.o 00:04:18.416 CC lib/accel/accel_rpc.o 00:04:18.416 CC lib/accel/accel_sw.o 00:04:18.416 LIB libspdk_init.a 00:04:18.416 SO libspdk_init.so.4.0 00:04:18.416 SYMLINK libspdk_init.so 00:04:18.416 CC lib/virtio/virtio.o 00:04:18.416 CC lib/virtio/virtio_vhost_user.o 00:04:18.416 CC lib/virtio/virtio_vfio_user.o 00:04:18.674 CC lib/virtio/virtio_pci.o 00:04:18.674 CC lib/event/app.o 00:04:18.674 CC lib/event/reactor.o 00:04:18.674 CC lib/event/log_rpc.o 00:04:18.674 LIB libspdk_accel.a 00:04:18.674 CC lib/event/app_rpc.o 00:04:18.674 SO libspdk_accel.so.14.0 00:04:18.932 CC lib/event/scheduler_static.o 00:04:18.932 LIB libspdk_virtio.a 00:04:18.932 SYMLINK libspdk_accel.so 00:04:18.932 SO libspdk_virtio.so.6.0 00:04:18.932 LIB libspdk_nvme.a 00:04:18.932 SYMLINK libspdk_virtio.so 00:04:18.932 CC lib/bdev/bdev.o 00:04:18.932 CC lib/bdev/bdev_zone.o 00:04:18.932 CC lib/bdev/bdev_rpc.o 00:04:18.932 CC lib/bdev/scsi_nvme.o 00:04:18.932 CC lib/bdev/part.o 00:04:18.932 LIB libspdk_event.a 00:04:19.190 SO libspdk_event.so.12.0 00:04:19.190 SO libspdk_nvme.so.12.0 00:04:19.190 SYMLINK libspdk_event.so 00:04:19.449 SYMLINK libspdk_nvme.so 00:04:20.383 LIB libspdk_blob.a 00:04:20.383 SO libspdk_blob.so.10.1 00:04:20.642 SYMLINK libspdk_blob.so 00:04:20.642 CC lib/blobfs/blobfs.o 00:04:20.642 CC lib/lvol/lvol.o 00:04:20.642 CC lib/blobfs/tree.o 00:04:21.576 LIB libspdk_blobfs.a 00:04:21.576 LIB libspdk_lvol.a 00:04:21.576 SO libspdk_blobfs.so.9.0 00:04:21.576 SO libspdk_lvol.so.9.1 00:04:21.576 SYMLINK libspdk_blobfs.so 00:04:21.576 SYMLINK libspdk_lvol.so 00:04:21.576 LIB libspdk_bdev.a 00:04:21.833 SO libspdk_bdev.so.14.0 00:04:21.833 SYMLINK libspdk_bdev.so 00:04:21.833 CC lib/nvmf/ctrlr.o 00:04:21.833 CC lib/nvmf/ctrlr_discovery.o 00:04:21.833 CC lib/nvmf/ctrlr_bdev.o 00:04:21.833 CC lib/nvmf/subsystem.o 00:04:21.833 CC lib/nvmf/nvmf.o 00:04:21.833 CC lib/nvmf/nvmf_rpc.o 00:04:21.833 CC lib/nbd/nbd.o 00:04:21.833 CC lib/scsi/dev.o 00:04:21.833 CC lib/ublk/ublk.o 00:04:22.091 CC lib/ftl/ftl_core.o 00:04:22.349 CC lib/scsi/lun.o 00:04:22.349 CC lib/ftl/ftl_init.o 00:04:22.349 CC lib/nbd/nbd_rpc.o 00:04:22.608 CC lib/nvmf/transport.o 00:04:22.608 CC lib/ftl/ftl_layout.o 00:04:22.608 CC lib/scsi/port.o 00:04:22.608 LIB libspdk_nbd.a 00:04:22.608 CC lib/ublk/ublk_rpc.o 00:04:22.608 SO libspdk_nbd.so.6.0 00:04:22.608 CC lib/ftl/ftl_debug.o 00:04:22.608 SYMLINK libspdk_nbd.so 00:04:22.608 CC lib/ftl/ftl_io.o 00:04:22.608 CC lib/scsi/scsi.o 00:04:22.866 CC lib/scsi/scsi_bdev.o 00:04:22.866 LIB libspdk_ublk.a 00:04:22.866 SO libspdk_ublk.so.2.0 00:04:22.866 CC lib/scsi/scsi_pr.o 00:04:22.866 CC lib/nvmf/tcp.o 00:04:22.866 CC lib/nvmf/rdma.o 00:04:22.866 SYMLINK libspdk_ublk.so 00:04:22.866 CC lib/ftl/ftl_sb.o 00:04:22.866 CC lib/scsi/scsi_rpc.o 00:04:22.866 CC lib/ftl/ftl_l2p.o 00:04:23.125 CC lib/scsi/task.o 00:04:23.125 CC lib/ftl/ftl_l2p_flat.o 00:04:23.125 CC lib/ftl/ftl_nv_cache.o 00:04:23.125 CC lib/ftl/ftl_band.o 00:04:23.125 CC lib/ftl/ftl_band_ops.o 00:04:23.125 CC lib/ftl/ftl_writer.o 00:04:23.125 CC lib/ftl/ftl_rq.o 00:04:23.125 LIB libspdk_scsi.a 
00:04:23.383 CC lib/ftl/ftl_reloc.o 00:04:23.383 SO libspdk_scsi.so.8.0 00:04:23.383 SYMLINK libspdk_scsi.so 00:04:23.383 CC lib/ftl/ftl_l2p_cache.o 00:04:23.383 CC lib/ftl/ftl_p2l.o 00:04:23.383 CC lib/ftl/mngt/ftl_mngt.o 00:04:23.383 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.642 CC lib/iscsi/conn.o 00:04:23.642 CC lib/iscsi/init_grp.o 00:04:23.642 CC lib/vhost/vhost.o 00:04:23.642 CC lib/vhost/vhost_rpc.o 00:04:23.642 CC lib/vhost/vhost_scsi.o 00:04:23.900 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.901 CC lib/iscsi/iscsi.o 00:04:23.901 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.901 CC lib/vhost/vhost_blk.o 00:04:23.901 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:24.160 CC lib/iscsi/md5.o 00:04:24.160 CC lib/vhost/rte_vhost_user.o 00:04:24.160 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:24.418 CC lib/iscsi/param.o 00:04:24.418 CC lib/iscsi/portal_grp.o 00:04:24.419 CC lib/iscsi/tgt_node.o 00:04:24.419 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:24.419 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:24.677 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:24.677 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.677 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:24.677 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.677 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.677 CC lib/iscsi/iscsi_subsystem.o 00:04:24.677 LIB libspdk_nvmf.a 00:04:24.936 CC lib/ftl/utils/ftl_conf.o 00:04:24.936 CC lib/ftl/utils/ftl_md.o 00:04:24.936 CC lib/iscsi/iscsi_rpc.o 00:04:24.936 CC lib/iscsi/task.o 00:04:24.936 SO libspdk_nvmf.so.17.0 00:04:24.936 CC lib/ftl/utils/ftl_mempool.o 00:04:24.936 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.936 CC lib/ftl/utils/ftl_property.o 00:04:24.936 SYMLINK libspdk_nvmf.so 00:04:24.936 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:25.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:25.194 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:25.194 LIB libspdk_vhost.a 00:04:25.194 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:25.194 LIB libspdk_iscsi.a 00:04:25.194 SO libspdk_vhost.so.7.1 00:04:25.194 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:25.194 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:25.452 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:25.452 SYMLINK libspdk_vhost.so 00:04:25.452 CC lib/ftl/base/ftl_base_dev.o 00:04:25.452 CC lib/ftl/base/ftl_base_bdev.o 00:04:25.452 SO libspdk_iscsi.so.7.0 00:04:25.452 CC lib/ftl/ftl_trace.o 00:04:25.452 SYMLINK libspdk_iscsi.so 00:04:25.710 LIB libspdk_ftl.a 00:04:25.969 SO libspdk_ftl.so.8.0 00:04:25.969 SYMLINK libspdk_ftl.so 00:04:26.228 CC module/env_dpdk/env_dpdk_rpc.o 00:04:26.488 CC module/accel/ioat/accel_ioat.o 00:04:26.488 CC module/blob/bdev/blob_bdev.o 00:04:26.488 CC module/scheduler/gscheduler/gscheduler.o 00:04:26.488 CC module/accel/error/accel_error.o 00:04:26.488 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:26.488 CC module/sock/posix/posix.o 00:04:26.488 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:26.488 CC module/accel/dsa/accel_dsa.o 00:04:26.488 CC module/accel/iaa/accel_iaa.o 00:04:26.488 LIB libspdk_env_dpdk_rpc.a 00:04:26.488 SO libspdk_env_dpdk_rpc.so.5.0 00:04:26.488 LIB libspdk_scheduler_gscheduler.a 00:04:26.488 LIB libspdk_scheduler_dpdk_governor.a 00:04:26.488 SYMLINK libspdk_env_dpdk_rpc.so 00:04:26.488 CC module/accel/error/accel_error_rpc.o 00:04:26.488 SO libspdk_scheduler_gscheduler.so.3.0 00:04:26.488 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:26.488 CC module/accel/ioat/accel_ioat_rpc.o 00:04:26.488 CC 
module/accel/iaa/accel_iaa_rpc.o 00:04:26.488 LIB libspdk_scheduler_dynamic.a 00:04:26.488 SYMLINK libspdk_scheduler_gscheduler.so 00:04:26.488 CC module/accel/dsa/accel_dsa_rpc.o 00:04:26.488 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:26.488 SO libspdk_scheduler_dynamic.so.3.0 00:04:26.746 SYMLINK libspdk_scheduler_dynamic.so 00:04:26.746 LIB libspdk_blob_bdev.a 00:04:26.746 SO libspdk_blob_bdev.so.10.1 00:04:26.746 LIB libspdk_accel_error.a 00:04:26.746 LIB libspdk_accel_ioat.a 00:04:26.746 LIB libspdk_accel_iaa.a 00:04:26.746 SO libspdk_accel_error.so.1.0 00:04:26.747 LIB libspdk_accel_dsa.a 00:04:26.747 SYMLINK libspdk_blob_bdev.so 00:04:26.747 SO libspdk_accel_ioat.so.5.0 00:04:26.747 SO libspdk_accel_iaa.so.2.0 00:04:26.747 SO libspdk_accel_dsa.so.4.0 00:04:26.747 SYMLINK libspdk_accel_error.so 00:04:26.747 SYMLINK libspdk_accel_ioat.so 00:04:26.747 SYMLINK libspdk_accel_iaa.so 00:04:26.747 SYMLINK libspdk_accel_dsa.so 00:04:27.005 CC module/bdev/malloc/bdev_malloc.o 00:04:27.005 CC module/bdev/gpt/gpt.o 00:04:27.005 CC module/bdev/nvme/bdev_nvme.o 00:04:27.005 CC module/bdev/null/bdev_null.o 00:04:27.005 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.005 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.005 CC module/bdev/delay/vbdev_delay.o 00:04:27.005 CC module/bdev/error/vbdev_error.o 00:04:27.005 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.005 LIB libspdk_sock_posix.a 00:04:27.005 SO libspdk_sock_posix.so.5.0 00:04:27.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.005 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.264 CC module/bdev/null/bdev_null_rpc.o 00:04:27.264 SYMLINK libspdk_sock_posix.so 00:04:27.264 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:27.264 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.264 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.264 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:27.264 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:27.264 LIB libspdk_blobfs_bdev.a 00:04:27.264 SO libspdk_blobfs_bdev.so.5.0 00:04:27.264 LIB libspdk_bdev_null.a 00:04:27.264 LIB libspdk_bdev_gpt.a 00:04:27.264 LIB libspdk_bdev_error.a 00:04:27.522 SO libspdk_bdev_null.so.5.0 00:04:27.522 SO libspdk_bdev_gpt.so.5.0 00:04:27.522 SYMLINK libspdk_blobfs_bdev.so 00:04:27.522 SO libspdk_bdev_error.so.5.0 00:04:27.522 LIB libspdk_bdev_passthru.a 00:04:27.522 LIB libspdk_bdev_malloc.a 00:04:27.522 LIB libspdk_bdev_delay.a 00:04:27.522 SYMLINK libspdk_bdev_gpt.so 00:04:27.522 SO libspdk_bdev_passthru.so.5.0 00:04:27.522 SYMLINK libspdk_bdev_null.so 00:04:27.522 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:27.522 SYMLINK libspdk_bdev_error.so 00:04:27.522 CC module/bdev/nvme/nvme_rpc.o 00:04:27.522 SO libspdk_bdev_malloc.so.5.0 00:04:27.522 SO libspdk_bdev_delay.so.5.0 00:04:27.522 LIB libspdk_bdev_lvol.a 00:04:27.522 SYMLINK libspdk_bdev_passthru.so 00:04:27.522 SO libspdk_bdev_lvol.so.5.0 00:04:27.522 SYMLINK libspdk_bdev_malloc.so 00:04:27.522 CC module/bdev/raid/bdev_raid.o 00:04:27.522 CC module/bdev/split/vbdev_split.o 00:04:27.522 SYMLINK libspdk_bdev_delay.so 00:04:27.522 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.522 SYMLINK libspdk_bdev_lvol.so 00:04:27.780 CC module/bdev/aio/bdev_aio.o 00:04:27.780 CC module/bdev/ftl/bdev_ftl.o 00:04:27.780 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.780 CC module/bdev/nvme/bdev_mdns_client.o 00:04:27.780 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.780 CC module/bdev/split/vbdev_split_rpc.o 00:04:28.038 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:28.038 CC module/bdev/aio/bdev_aio_rpc.o 00:04:28.038 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.038 LIB libspdk_bdev_split.a 00:04:28.038 CC module/bdev/nvme/vbdev_opal.o 00:04:28.038 SO libspdk_bdev_split.so.5.0 00:04:28.038 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.038 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.038 SYMLINK libspdk_bdev_split.so 00:04:28.038 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.038 LIB libspdk_bdev_zone_block.a 00:04:28.038 LIB libspdk_bdev_aio.a 00:04:28.038 SO libspdk_bdev_zone_block.so.5.0 00:04:28.038 SO libspdk_bdev_aio.so.5.0 00:04:28.298 LIB libspdk_bdev_ftl.a 00:04:28.298 SYMLINK libspdk_bdev_zone_block.so 00:04:28.298 CC module/bdev/raid/bdev_raid_rpc.o 00:04:28.298 SYMLINK libspdk_bdev_aio.so 00:04:28.298 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.298 SO libspdk_bdev_ftl.so.5.0 00:04:28.298 LIB libspdk_bdev_iscsi.a 00:04:28.298 CC module/bdev/raid/bdev_raid_sb.o 00:04:28.298 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.298 SO libspdk_bdev_iscsi.so.5.0 00:04:28.298 SYMLINK libspdk_bdev_ftl.so 00:04:28.298 CC module/bdev/raid/raid0.o 00:04:28.298 LIB libspdk_bdev_virtio.a 00:04:28.298 CC module/bdev/raid/raid1.o 00:04:28.298 SYMLINK libspdk_bdev_iscsi.so 00:04:28.298 CC module/bdev/raid/concat.o 00:04:28.298 SO libspdk_bdev_virtio.so.5.0 00:04:28.298 SYMLINK libspdk_bdev_virtio.so 00:04:28.557 LIB libspdk_bdev_raid.a 00:04:28.557 SO libspdk_bdev_raid.so.5.0 00:04:28.816 SYMLINK libspdk_bdev_raid.so 00:04:29.075 LIB libspdk_bdev_nvme.a 00:04:29.075 SO libspdk_bdev_nvme.so.6.0 00:04:29.075 SYMLINK libspdk_bdev_nvme.so 00:04:29.333 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.333 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.333 CC module/event/subsystems/sock/sock.o 00:04:29.333 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.333 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.333 CC module/event/subsystems/vmd/vmd.o 00:04:29.333 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.592 LIB libspdk_event_sock.a 00:04:29.592 LIB libspdk_event_vhost_blk.a 00:04:29.592 LIB libspdk_event_scheduler.a 00:04:29.592 LIB libspdk_event_vmd.a 00:04:29.592 SO libspdk_event_vhost_blk.so.2.0 00:04:29.592 SO libspdk_event_sock.so.4.0 00:04:29.592 SO libspdk_event_scheduler.so.3.0 00:04:29.592 SO libspdk_event_vmd.so.5.0 00:04:29.592 LIB libspdk_event_iobuf.a 00:04:29.592 SYMLINK libspdk_event_vhost_blk.so 00:04:29.592 SYMLINK libspdk_event_sock.so 00:04:29.592 SYMLINK libspdk_event_scheduler.so 00:04:29.592 SO libspdk_event_iobuf.so.2.0 00:04:29.592 SYMLINK libspdk_event_vmd.so 00:04:29.592 SYMLINK libspdk_event_iobuf.so 00:04:29.851 CC module/event/subsystems/accel/accel.o 00:04:30.109 LIB libspdk_event_accel.a 00:04:30.109 SO libspdk_event_accel.so.5.0 00:04:30.109 SYMLINK libspdk_event_accel.so 00:04:30.368 CC module/event/subsystems/bdev/bdev.o 00:04:30.626 LIB libspdk_event_bdev.a 00:04:30.627 SO libspdk_event_bdev.so.5.0 00:04:30.627 SYMLINK libspdk_event_bdev.so 00:04:30.885 CC module/event/subsystems/nbd/nbd.o 00:04:30.885 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.885 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.885 CC module/event/subsystems/ublk/ublk.o 00:04:30.885 CC module/event/subsystems/scsi/scsi.o 00:04:30.885 LIB libspdk_event_ublk.a 00:04:30.885 LIB libspdk_event_nbd.a 00:04:30.885 LIB libspdk_event_scsi.a 00:04:30.885 SO libspdk_event_ublk.so.2.0 00:04:31.144 SO libspdk_event_nbd.so.5.0 00:04:31.144 SO libspdk_event_scsi.so.5.0 00:04:31.144 SYMLINK libspdk_event_ublk.so 00:04:31.144 SYMLINK libspdk_event_nbd.so 00:04:31.144 LIB 
libspdk_event_nvmf.a 00:04:31.144 SYMLINK libspdk_event_scsi.so 00:04:31.144 SO libspdk_event_nvmf.so.5.0 00:04:31.144 SYMLINK libspdk_event_nvmf.so 00:04:31.403 CC module/event/subsystems/iscsi/iscsi.o 00:04:31.403 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:31.403 LIB libspdk_event_vhost_scsi.a 00:04:31.403 LIB libspdk_event_iscsi.a 00:04:31.403 SO libspdk_event_vhost_scsi.so.2.0 00:04:31.403 SO libspdk_event_iscsi.so.5.0 00:04:31.668 SYMLINK libspdk_event_vhost_scsi.so 00:04:31.668 SYMLINK libspdk_event_iscsi.so 00:04:31.668 SO libspdk.so.5.0 00:04:31.668 SYMLINK libspdk.so 00:04:31.929 CC app/trace_record/trace_record.o 00:04:31.929 CXX app/trace/trace.o 00:04:31.929 CC app/iscsi_tgt/iscsi_tgt.o 00:04:31.929 CC app/nvmf_tgt/nvmf_main.o 00:04:31.929 CC examples/ioat/perf/perf.o 00:04:31.929 CC app/spdk_tgt/spdk_tgt.o 00:04:31.929 CC examples/accel/perf/accel_perf.o 00:04:31.929 CC test/accel/dif/dif.o 00:04:31.929 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.929 CC examples/blob/hello_world/hello_blob.o 00:04:32.187 LINK spdk_trace_record 00:04:32.187 LINK nvmf_tgt 00:04:32.187 LINK iscsi_tgt 00:04:32.187 LINK ioat_perf 00:04:32.187 LINK spdk_tgt 00:04:32.187 LINK hello_bdev 00:04:32.187 LINK hello_blob 00:04:32.187 LINK spdk_trace 00:04:32.445 LINK dif 00:04:32.445 CC examples/blob/cli/blobcli.o 00:04:32.445 CC examples/ioat/verify/verify.o 00:04:32.445 LINK accel_perf 00:04:32.445 CC app/spdk_lspci/spdk_lspci.o 00:04:32.445 CC examples/nvme/hello_world/hello_world.o 00:04:32.445 CC examples/nvme/reconnect/reconnect.o 00:04:32.445 CC examples/sock/hello_world/hello_sock.o 00:04:32.445 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.704 CC examples/bdev/bdevperf/bdevperf.o 00:04:32.704 LINK spdk_lspci 00:04:32.704 LINK verify 00:04:32.704 CC test/app/bdev_svc/bdev_svc.o 00:04:32.704 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:32.704 LINK hello_world 00:04:32.704 LINK hello_sock 00:04:32.704 CC app/spdk_nvme_perf/perf.o 00:04:32.962 LINK blobcli 00:04:32.962 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.962 LINK reconnect 00:04:32.962 LINK bdev_svc 00:04:32.962 CC examples/vmd/led/led.o 00:04:32.962 CC examples/nvme/arbitration/arbitration.o 00:04:32.962 LINK lsvmd 00:04:32.962 LINK nvme_manage 00:04:32.962 CC test/app/histogram_perf/histogram_perf.o 00:04:32.962 LINK led 00:04:32.962 CC test/app/jsoncat/jsoncat.o 00:04:33.221 LINK nvme_fuzz 00:04:33.221 CC test/app/stub/stub.o 00:04:33.221 LINK histogram_perf 00:04:33.221 LINK jsoncat 00:04:33.221 CC examples/nvme/hotplug/hotplug.o 00:04:33.221 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.221 CC test/bdev/bdevio/bdevio.o 00:04:33.221 LINK bdevperf 00:04:33.221 LINK stub 00:04:33.221 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:33.221 LINK arbitration 00:04:33.479 TEST_HEADER include/spdk/accel.h 00:04:33.479 TEST_HEADER include/spdk/accel_module.h 00:04:33.479 TEST_HEADER include/spdk/assert.h 00:04:33.479 TEST_HEADER include/spdk/barrier.h 00:04:33.479 TEST_HEADER include/spdk/base64.h 00:04:33.479 TEST_HEADER include/spdk/bdev.h 00:04:33.479 TEST_HEADER include/spdk/bdev_module.h 00:04:33.479 TEST_HEADER include/spdk/bdev_zone.h 00:04:33.479 TEST_HEADER include/spdk/bit_array.h 00:04:33.479 TEST_HEADER include/spdk/bit_pool.h 00:04:33.479 TEST_HEADER include/spdk/blob_bdev.h 00:04:33.479 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:33.479 TEST_HEADER include/spdk/blobfs.h 00:04:33.479 TEST_HEADER include/spdk/blob.h 00:04:33.479 TEST_HEADER include/spdk/conf.h 00:04:33.479 TEST_HEADER include/spdk/config.h 
00:04:33.479 TEST_HEADER include/spdk/cpuset.h 00:04:33.479 TEST_HEADER include/spdk/crc16.h 00:04:33.479 TEST_HEADER include/spdk/crc32.h 00:04:33.479 TEST_HEADER include/spdk/crc64.h 00:04:33.479 TEST_HEADER include/spdk/dif.h 00:04:33.479 TEST_HEADER include/spdk/dma.h 00:04:33.479 TEST_HEADER include/spdk/endian.h 00:04:33.479 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.479 TEST_HEADER include/spdk/env.h 00:04:33.479 TEST_HEADER include/spdk/event.h 00:04:33.479 LINK cmb_copy 00:04:33.479 TEST_HEADER include/spdk/fd_group.h 00:04:33.479 TEST_HEADER include/spdk/fd.h 00:04:33.479 TEST_HEADER include/spdk/file.h 00:04:33.479 TEST_HEADER include/spdk/ftl.h 00:04:33.479 TEST_HEADER include/spdk/gpt_spec.h 00:04:33.479 LINK hotplug 00:04:33.479 TEST_HEADER include/spdk/hexlify.h 00:04:33.479 TEST_HEADER include/spdk/histogram_data.h 00:04:33.479 TEST_HEADER include/spdk/idxd.h 00:04:33.479 TEST_HEADER include/spdk/idxd_spec.h 00:04:33.479 TEST_HEADER include/spdk/init.h 00:04:33.479 TEST_HEADER include/spdk/ioat.h 00:04:33.479 CC test/blobfs/mkfs/mkfs.o 00:04:33.479 TEST_HEADER include/spdk/ioat_spec.h 00:04:33.479 TEST_HEADER include/spdk/iscsi_spec.h 00:04:33.479 TEST_HEADER include/spdk/json.h 00:04:33.479 TEST_HEADER include/spdk/jsonrpc.h 00:04:33.479 TEST_HEADER include/spdk/likely.h 00:04:33.479 TEST_HEADER include/spdk/log.h 00:04:33.479 TEST_HEADER include/spdk/lvol.h 00:04:33.479 TEST_HEADER include/spdk/memory.h 00:04:33.479 TEST_HEADER include/spdk/mmio.h 00:04:33.479 TEST_HEADER include/spdk/nbd.h 00:04:33.479 TEST_HEADER include/spdk/notify.h 00:04:33.479 TEST_HEADER include/spdk/nvme.h 00:04:33.479 TEST_HEADER include/spdk/nvme_intel.h 00:04:33.479 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:33.479 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:33.479 TEST_HEADER include/spdk/nvme_spec.h 00:04:33.479 TEST_HEADER include/spdk/nvme_zns.h 00:04:33.479 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:33.479 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:33.479 TEST_HEADER include/spdk/nvmf.h 00:04:33.479 TEST_HEADER include/spdk/nvmf_spec.h 00:04:33.479 TEST_HEADER include/spdk/nvmf_transport.h 00:04:33.479 TEST_HEADER include/spdk/opal.h 00:04:33.479 TEST_HEADER include/spdk/opal_spec.h 00:04:33.479 TEST_HEADER include/spdk/pci_ids.h 00:04:33.479 TEST_HEADER include/spdk/pipe.h 00:04:33.479 TEST_HEADER include/spdk/queue.h 00:04:33.479 TEST_HEADER include/spdk/reduce.h 00:04:33.479 LINK spdk_nvme_perf 00:04:33.479 TEST_HEADER include/spdk/rpc.h 00:04:33.479 TEST_HEADER include/spdk/scheduler.h 00:04:33.479 TEST_HEADER include/spdk/scsi.h 00:04:33.479 TEST_HEADER include/spdk/scsi_spec.h 00:04:33.479 TEST_HEADER include/spdk/sock.h 00:04:33.479 TEST_HEADER include/spdk/stdinc.h 00:04:33.479 TEST_HEADER include/spdk/string.h 00:04:33.479 TEST_HEADER include/spdk/thread.h 00:04:33.479 TEST_HEADER include/spdk/trace.h 00:04:33.479 TEST_HEADER include/spdk/trace_parser.h 00:04:33.479 TEST_HEADER include/spdk/tree.h 00:04:33.479 TEST_HEADER include/spdk/ublk.h 00:04:33.479 TEST_HEADER include/spdk/util.h 00:04:33.479 TEST_HEADER include/spdk/uuid.h 00:04:33.479 TEST_HEADER include/spdk/version.h 00:04:33.479 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.479 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.479 CC test/dma/test_dma/test_dma.o 00:04:33.479 TEST_HEADER include/spdk/vhost.h 00:04:33.479 TEST_HEADER include/spdk/vmd.h 00:04:33.479 TEST_HEADER include/spdk/xor.h 00:04:33.479 TEST_HEADER include/spdk/zipf.h 00:04:33.479 CXX test/cpp_headers/accel.o 00:04:33.479 CC 
test/event/event_perf/event_perf.o 00:04:33.479 CXX test/cpp_headers/accel_module.o 00:04:33.737 CC test/env/mem_callbacks/mem_callbacks.o 00:04:33.737 LINK bdevio 00:04:33.737 CC examples/nvme/abort/abort.o 00:04:33.737 LINK mkfs 00:04:33.737 LINK event_perf 00:04:33.737 CC app/spdk_nvme_identify/identify.o 00:04:33.737 CXX test/cpp_headers/assert.o 00:04:33.737 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:33.737 LINK mem_callbacks 00:04:33.996 CC test/event/reactor/reactor.o 00:04:33.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.996 CC test/event/reactor_perf/reactor_perf.o 00:04:33.996 CXX test/cpp_headers/barrier.o 00:04:33.996 LINK test_dma 00:04:33.996 LINK pmr_persistence 00:04:33.996 CC test/env/vtophys/vtophys.o 00:04:33.996 LINK abort 00:04:33.996 LINK reactor 00:04:33.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.996 LINK reactor_perf 00:04:33.996 CXX test/cpp_headers/base64.o 00:04:34.254 LINK vtophys 00:04:34.254 CC test/event/app_repeat/app_repeat.o 00:04:34.254 CC test/rpc_client/rpc_client_test.o 00:04:34.254 CC test/nvme/aer/aer.o 00:04:34.254 CXX test/cpp_headers/bdev.o 00:04:34.254 CC test/lvol/esnap/esnap.o 00:04:34.254 CC examples/nvmf/nvmf/nvmf.o 00:04:34.254 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:34.512 LINK app_repeat 00:04:34.512 LINK spdk_nvme_identify 00:04:34.512 LINK rpc_client_test 00:04:34.512 CXX test/cpp_headers/bdev_module.o 00:04:34.512 LINK vhost_fuzz 00:04:34.512 LINK aer 00:04:34.512 LINK env_dpdk_post_init 00:04:34.770 LINK nvmf 00:04:34.770 CC app/spdk_nvme_discover/discovery_aer.o 00:04:34.770 CXX test/cpp_headers/bdev_zone.o 00:04:34.770 CC app/spdk_top/spdk_top.o 00:04:34.770 CC test/event/scheduler/scheduler.o 00:04:34.770 CC test/nvme/reset/reset.o 00:04:34.770 CC app/vhost/vhost.o 00:04:34.770 CC test/env/memory/memory_ut.o 00:04:34.770 CXX test/cpp_headers/bit_array.o 00:04:34.770 LINK spdk_nvme_discover 00:04:34.770 LINK iscsi_fuzz 00:04:35.030 LINK scheduler 00:04:35.030 LINK vhost 00:04:35.030 CC examples/util/zipf/zipf.o 00:04:35.030 LINK reset 00:04:35.030 CXX test/cpp_headers/bit_pool.o 00:04:35.030 LINK zipf 00:04:35.030 CC examples/thread/thread/thread_ex.o 00:04:35.289 CXX test/cpp_headers/blob_bdev.o 00:04:35.289 CC test/nvme/sgl/sgl.o 00:04:35.289 CC app/spdk_dd/spdk_dd.o 00:04:35.289 CC examples/idxd/perf/perf.o 00:04:35.289 LINK memory_ut 00:04:35.289 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.289 CXX test/cpp_headers/blobfs_bdev.o 00:04:35.548 LINK thread 00:04:35.548 LINK sgl 00:04:35.548 LINK spdk_top 00:04:35.548 CXX test/cpp_headers/blobfs.o 00:04:35.548 CC test/env/pci/pci_ut.o 00:04:35.548 LINK interrupt_tgt 00:04:35.548 LINK idxd_perf 00:04:35.548 LINK spdk_dd 00:04:35.548 CC test/nvme/e2edp/nvme_dp.o 00:04:35.548 CC test/nvme/overhead/overhead.o 00:04:35.807 CC test/nvme/err_injection/err_injection.o 00:04:35.807 CXX test/cpp_headers/blob.o 00:04:35.807 CXX test/cpp_headers/conf.o 00:04:35.807 CC test/nvme/startup/startup.o 00:04:35.807 CC app/fio/nvme/fio_plugin.o 00:04:35.807 LINK err_injection 00:04:35.807 CXX test/cpp_headers/config.o 00:04:35.807 LINK pci_ut 00:04:35.807 CXX test/cpp_headers/cpuset.o 00:04:35.807 LINK nvme_dp 00:04:35.807 LINK overhead 00:04:36.065 LINK startup 00:04:36.065 CC app/fio/bdev/fio_plugin.o 00:04:36.065 CXX test/cpp_headers/crc16.o 00:04:36.065 CXX test/cpp_headers/crc32.o 00:04:36.065 CC test/nvme/reserve/reserve.o 00:04:36.065 CC test/nvme/simple_copy/simple_copy.o 00:04:36.065 CC test/thread/poller_perf/poller_perf.o 
00:04:36.323 CC test/nvme/connect_stress/connect_stress.o 00:04:36.323 CXX test/cpp_headers/crc64.o 00:04:36.323 CXX test/cpp_headers/dif.o 00:04:36.323 LINK poller_perf 00:04:36.323 LINK spdk_nvme 00:04:36.323 LINK reserve 00:04:36.323 LINK connect_stress 00:04:36.581 LINK simple_copy 00:04:36.581 CXX test/cpp_headers/dma.o 00:04:36.581 CXX test/cpp_headers/endian.o 00:04:36.581 LINK spdk_bdev 00:04:36.581 CXX test/cpp_headers/env_dpdk.o 00:04:36.581 CXX test/cpp_headers/env.o 00:04:36.581 CC test/nvme/boot_partition/boot_partition.o 00:04:36.581 CXX test/cpp_headers/event.o 00:04:36.581 CXX test/cpp_headers/fd_group.o 00:04:36.581 CC test/nvme/compliance/nvme_compliance.o 00:04:36.581 CXX test/cpp_headers/fd.o 00:04:36.581 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.840 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.840 CC test/nvme/fdp/fdp.o 00:04:36.840 LINK boot_partition 00:04:36.840 CXX test/cpp_headers/file.o 00:04:36.840 CXX test/cpp_headers/ftl.o 00:04:36.840 LINK fused_ordering 00:04:36.840 CXX test/cpp_headers/gpt_spec.o 00:04:36.840 CC test/nvme/cuse/cuse.o 00:04:36.840 LINK nvme_compliance 00:04:37.098 LINK doorbell_aers 00:04:37.098 CXX test/cpp_headers/hexlify.o 00:04:37.098 CXX test/cpp_headers/histogram_data.o 00:04:37.098 CXX test/cpp_headers/idxd.o 00:04:37.098 LINK fdp 00:04:37.098 CXX test/cpp_headers/idxd_spec.o 00:04:37.098 CXX test/cpp_headers/init.o 00:04:37.098 CXX test/cpp_headers/ioat.o 00:04:37.098 CXX test/cpp_headers/ioat_spec.o 00:04:37.098 CXX test/cpp_headers/iscsi_spec.o 00:04:37.098 CXX test/cpp_headers/json.o 00:04:37.098 CXX test/cpp_headers/jsonrpc.o 00:04:37.357 CXX test/cpp_headers/likely.o 00:04:37.357 CXX test/cpp_headers/log.o 00:04:37.357 CXX test/cpp_headers/lvol.o 00:04:37.357 CXX test/cpp_headers/memory.o 00:04:37.357 CXX test/cpp_headers/mmio.o 00:04:37.357 CXX test/cpp_headers/nbd.o 00:04:37.357 CXX test/cpp_headers/notify.o 00:04:37.357 CXX test/cpp_headers/nvme.o 00:04:37.357 CXX test/cpp_headers/nvme_intel.o 00:04:37.357 CXX test/cpp_headers/nvme_ocssd.o 00:04:37.357 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:37.615 CXX test/cpp_headers/nvme_spec.o 00:04:37.615 CXX test/cpp_headers/nvme_zns.o 00:04:37.615 CXX test/cpp_headers/nvmf_cmd.o 00:04:37.615 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:37.615 CXX test/cpp_headers/nvmf.o 00:04:37.615 CXX test/cpp_headers/nvmf_spec.o 00:04:37.615 CXX test/cpp_headers/nvmf_transport.o 00:04:37.615 CXX test/cpp_headers/opal.o 00:04:37.873 CXX test/cpp_headers/opal_spec.o 00:04:37.873 CXX test/cpp_headers/pci_ids.o 00:04:37.873 CXX test/cpp_headers/pipe.o 00:04:37.873 CXX test/cpp_headers/queue.o 00:04:37.873 CXX test/cpp_headers/reduce.o 00:04:37.873 CXX test/cpp_headers/rpc.o 00:04:37.873 CXX test/cpp_headers/scheduler.o 00:04:37.873 CXX test/cpp_headers/scsi.o 00:04:37.873 LINK cuse 00:04:37.873 CXX test/cpp_headers/scsi_spec.o 00:04:38.131 CXX test/cpp_headers/sock.o 00:04:38.131 CXX test/cpp_headers/stdinc.o 00:04:38.131 CXX test/cpp_headers/string.o 00:04:38.131 CXX test/cpp_headers/thread.o 00:04:38.131 CXX test/cpp_headers/trace.o 00:04:38.131 CXX test/cpp_headers/trace_parser.o 00:04:38.131 CXX test/cpp_headers/tree.o 00:04:38.131 CXX test/cpp_headers/ublk.o 00:04:38.131 CXX test/cpp_headers/util.o 00:04:38.131 CXX test/cpp_headers/uuid.o 00:04:38.389 CXX test/cpp_headers/version.o 00:04:38.389 CXX test/cpp_headers/vfio_user_pci.o 00:04:38.389 CXX test/cpp_headers/vfio_user_spec.o 00:04:38.389 CXX test/cpp_headers/vhost.o 00:04:38.389 CXX test/cpp_headers/vmd.o 00:04:38.389 CXX 
test/cpp_headers/xor.o 00:04:38.389 CXX test/cpp_headers/zipf.o 00:04:38.954 LINK esnap 00:04:42.283 00:04:42.283 real 0m55.340s 00:04:42.283 user 5m6.081s 00:04:42.283 sys 1m8.301s 00:04:42.283 06:22:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:42.283 06:22:34 -- common/autotest_common.sh@10 -- $ set +x 00:04:42.283 ************************************ 00:04:42.283 END TEST make 00:04:42.283 ************************************ 00:04:42.283 06:22:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.283 06:22:34 -- nvmf/common.sh@7 -- # uname -s 00:04:42.283 06:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.283 06:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.283 06:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.283 06:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.283 06:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.283 06:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.283 06:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.283 06:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.283 06:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.283 06:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.283 06:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:04:42.283 06:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:04:42.283 06:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.283 06:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.283 06:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:42.283 06:22:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.283 06:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.283 06:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.283 06:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.283 06:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.283 06:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.283 06:22:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.283 06:22:34 -- paths/export.sh@5 -- # export PATH 00:04:42.283 06:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.283 06:22:34 -- nvmf/common.sh@46 -- # : 0 
00:04:42.283 06:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:42.283 06:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:42.283 06:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:42.283 06:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.283 06:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.283 06:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:42.283 06:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:42.283 06:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:42.283 06:22:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:42.283 06:22:34 -- spdk/autotest.sh@32 -- # uname -s 00:04:42.283 06:22:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:42.283 06:22:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:42.283 06:22:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.283 06:22:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:42.283 06:22:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:42.283 06:22:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:42.283 06:22:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:42.283 06:22:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:42.283 06:22:34 -- spdk/autotest.sh@48 -- # udevadm_pid=61503 00:04:42.283 06:22:34 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.283 06:22:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:42.283 06:22:34 -- spdk/autotest.sh@54 -- # echo 61520 00:04:42.283 06:22:34 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.283 06:22:34 -- spdk/autotest.sh@56 -- # echo 61526 00:04:42.283 06:22:34 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:42.283 06:22:34 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:42.284 06:22:34 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:42.284 06:22:34 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:42.284 06:22:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:42.284 06:22:34 -- common/autotest_common.sh@10 -- # set +x 00:04:42.284 06:22:34 -- spdk/autotest.sh@70 -- # create_test_list 00:04:42.284 06:22:34 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:42.284 06:22:34 -- common/autotest_common.sh@10 -- # set +x 00:04:42.284 06:22:34 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:42.284 06:22:34 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:42.284 06:22:34 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:42.284 06:22:34 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:42.284 06:22:34 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:42.284 06:22:34 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:42.284 06:22:34 -- common/autotest_common.sh@1440 -- # uname 00:04:42.284 06:22:34 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:42.284 06:22:34 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:42.284 06:22:34 -- common/autotest_common.sh@1460 -- # uname 00:04:42.284 
06:22:34 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:42.284 06:22:34 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:42.284 06:22:34 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:42.284 06:22:34 -- spdk/autotest.sh@83 -- # hash lcov 00:04:42.284 06:22:34 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:42.284 06:22:34 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:42.284 --rc lcov_branch_coverage=1 00:04:42.284 --rc lcov_function_coverage=1 00:04:42.284 --rc genhtml_branch_coverage=1 00:04:42.284 --rc genhtml_function_coverage=1 00:04:42.284 --rc genhtml_legend=1 00:04:42.284 --rc geninfo_all_blocks=1 00:04:42.284 ' 00:04:42.284 06:22:34 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:42.284 --rc lcov_branch_coverage=1 00:04:42.284 --rc lcov_function_coverage=1 00:04:42.284 --rc genhtml_branch_coverage=1 00:04:42.284 --rc genhtml_function_coverage=1 00:04:42.284 --rc genhtml_legend=1 00:04:42.284 --rc geninfo_all_blocks=1 00:04:42.284 ' 00:04:42.284 06:22:34 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:42.284 --rc lcov_branch_coverage=1 00:04:42.284 --rc lcov_function_coverage=1 00:04:42.284 --rc genhtml_branch_coverage=1 00:04:42.284 --rc genhtml_function_coverage=1 00:04:42.284 --rc genhtml_legend=1 00:04:42.284 --rc geninfo_all_blocks=1 00:04:42.284 --no-external' 00:04:42.284 06:22:34 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:42.284 --rc lcov_branch_coverage=1 00:04:42.284 --rc lcov_function_coverage=1 00:04:42.284 --rc genhtml_branch_coverage=1 00:04:42.284 --rc genhtml_function_coverage=1 00:04:42.284 --rc genhtml_legend=1 00:04:42.284 --rc geninfo_all_blocks=1 00:04:42.284 --no-external' 00:04:42.284 06:22:34 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:42.284 lcov: LCOV version 1.15 00:04:42.284 06:22:34 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.402 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:50.402 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:50.402 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:50.402 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:50.402 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:50.402 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:08.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:08.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:08.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:08.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:08.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:08.491 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:08.491 [the identical "<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for …" pair repeats for every remaining header object under test/cpp_headers, barrier.gcno through xor.gcno] 00:05:08.493 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:08.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:11.026 06:23:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:11.026 06:23:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.026 06:23:03 -- common/autotest_common.sh@10 -- # set +x 00:05:11.026 06:23:03 -- spdk/autotest.sh@102 -- # rm -f 00:05:11.026 06:23:03 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.285 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:11.285 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:11.285 06:23:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:11.285 06:23:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:11.285 06:23:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:11.285 06:23:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:11.285 06:23:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.285 06:23:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:11.285 06:23:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:11.285 06:23:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.285 06:23:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:11.285 06:23:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:11.285 06:23:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.285 06:23:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:11.285 06:23:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:11.285 06:23:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.285 06:23:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:11.285 06:23:03 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:11.285 06:23:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.285 06:23:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.285 06:23:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:11.285 06:23:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:11.285 06:23:03 -- spdk/autotest.sh@121 -- # grep -v p 00:05:11.285 06:23:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.285 06:23:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.285 06:23:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:11.285 06:23:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:11.285 06:23:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 
00:05:11.285 No valid GPT data, bailing 00:05:11.285 06:23:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.285 06:23:03 -- scripts/common.sh@393 -- # pt= 00:05:11.285 06:23:03 -- scripts/common.sh@394 -- # return 1 00:05:11.285 06:23:03 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.285 1+0 records in 00:05:11.285 1+0 records out 00:05:11.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478797 s, 219 MB/s 00:05:11.285 06:23:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.285 06:23:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.285 06:23:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:11.285 06:23:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:11.285 06:23:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.544 No valid GPT data, bailing 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # pt= 00:05:11.544 06:23:04 -- scripts/common.sh@394 -- # return 1 00:05:11.544 06:23:04 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.544 1+0 records in 00:05:11.544 1+0 records out 00:05:11.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491635 s, 213 MB/s 00:05:11.544 06:23:04 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.544 06:23:04 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.544 06:23:04 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:05:11.544 06:23:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:11.544 06:23:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:11.544 No valid GPT data, bailing 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # pt= 00:05:11.544 06:23:04 -- scripts/common.sh@394 -- # return 1 00:05:11.544 06:23:04 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:11.544 1+0 records in 00:05:11.544 1+0 records out 00:05:11.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460634 s, 228 MB/s 00:05:11.544 06:23:04 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.544 06:23:04 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.544 06:23:04 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:11.544 06:23:04 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:11.544 06:23:04 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:11.544 No valid GPT data, bailing 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:11.544 06:23:04 -- scripts/common.sh@393 -- # pt= 00:05:11.544 06:23:04 -- scripts/common.sh@394 -- # return 1 00:05:11.544 06:23:04 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:11.544 1+0 records in 00:05:11.544 1+0 records out 00:05:11.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045298 s, 231 MB/s 00:05:11.544 06:23:04 -- spdk/autotest.sh@129 -- # sync 00:05:11.802 06:23:04 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.802 06:23:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.802 06:23:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.704 
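Condensed, the pre-cleanup pass above amounts to the following sketch (it folds the spdk-gpt.py probe into the blkid check shown in the trace; the device glob and block sizes are as logged):

    # Zero the first MiB of every whole NVMe namespace (partitions excluded)
    # that carries no valid partition table, mirroring autotest's pre-cleanup.
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)  # empty => no GPT/MBR found
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1    # wipe stale labels/signatures
        fi
    done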
06:23:06 -- spdk/autotest.sh@135 -- # uname -s 00:05:13.704 06:23:06 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:13.704 06:23:06 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:13.704 06:23:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.704 06:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.704 06:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.704 ************************************ 00:05:13.704 START TEST setup.sh 00:05:13.704 ************************************ 00:05:13.704 06:23:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:13.705 * Looking for test storage... 00:05:13.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.705 06:23:06 -- setup/test-setup.sh@10 -- # uname -s 00:05:13.705 06:23:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:13.705 06:23:06 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:13.705 06:23:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.705 06:23:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.705 06:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.705 ************************************ 00:05:13.705 START TEST acl 00:05:13.705 ************************************ 00:05:13.705 06:23:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:13.705 * Looking for test storage... 00:05:13.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.705 06:23:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:13.705 06:23:06 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:13.705 06:23:06 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:13.705 06:23:06 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:13.705 06:23:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.705 06:23:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:13.705 06:23:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:13.705 06:23:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.705 06:23:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:13.705 06:23:06 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:13.705 06:23:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.705 06:23:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:13.705 06:23:06 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:13.705 06:23:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.705 06:23:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:13.705 06:23:06 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:13.705 06:23:06 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.705 06:23:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.705 06:23:06 -- setup/acl.sh@12 -- # devs=() 00:05:13.705 06:23:06 -- setup/acl.sh@12 -- # declare -a devs 00:05:13.705 06:23:06 -- setup/acl.sh@13 -- # drivers=() 00:05:13.705 06:23:06 -- setup/acl.sh@13 -- # declare -A drivers 00:05:13.705 06:23:06 -- setup/acl.sh@51 -- # setup reset 00:05:13.705 06:23:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.705 06:23:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.638 06:23:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:14.638 06:23:07 -- setup/acl.sh@16 -- # local dev driver 00:05:14.638 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.638 06:23:07 -- setup/acl.sh@15 -- # setup output status 00:05:14.638 06:23:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.638 06:23:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.638 Hugepages 00:05:14.638 node hugesize free / total 00:05:14.638 06:23:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:14.638 06:23:07 -- setup/acl.sh@19 -- # continue 00:05:14.638 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.638 00:05:14.638 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.638 06:23:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:14.638 06:23:07 -- setup/acl.sh@19 -- # continue 00:05:14.638 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.638 06:23:07 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:14.638 06:23:07 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:14.638 06:23:07 -- setup/acl.sh@20 -- # continue 00:05:14.638 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.897 06:23:07 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:14.897 06:23:07 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.897 06:23:07 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:14.897 06:23:07 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.897 06:23:07 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.897 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.897 06:23:07 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:14.897 06:23:07 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.897 06:23:07 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:14.897 06:23:07 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.897 06:23:07 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.897 06:23:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.897 06:23:07 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:14.897 06:23:07 -- setup/acl.sh@54 -- # run_test denied denied 00:05:14.897 06:23:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.897 06:23:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.897 06:23:07 -- common/autotest_common.sh@10 -- # set +x 00:05:14.897 ************************************ 00:05:14.897 START TEST denied 00:05:14.897 ************************************ 00:05:14.897 06:23:07 -- common/autotest_common.sh@1104 -- # denied 00:05:14.897 06:23:07 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:14.897 06:23:07 -- setup/acl.sh@38 -- # setup output config 00:05:14.897 06:23:07 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:14.897 06:23:07 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:05:14.897 06:23:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.831 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:15.831 06:23:08 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:15.831 06:23:08 -- setup/acl.sh@28 -- # local dev driver 00:05:15.831 06:23:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:15.831 06:23:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:15.831 06:23:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:15.831 06:23:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:15.831 06:23:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:15.831 06:23:08 -- setup/acl.sh@41 -- # setup reset 00:05:15.831 06:23:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.831 06:23:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.398 00:05:16.398 real 0m1.478s 00:05:16.398 user 0m0.597s 00:05:16.398 sys 0m0.815s 00:05:16.398 06:23:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.398 06:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:16.398 ************************************ 00:05:16.398 END TEST denied 00:05:16.398 ************************************ 00:05:16.398 06:23:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:16.398 06:23:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.398 06:23:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.398 06:23:08 -- common/autotest_common.sh@10 -- # set +x 00:05:16.398 ************************************ 00:05:16.398 START TEST allowed 00:05:16.398 ************************************ 00:05:16.398 06:23:08 -- common/autotest_common.sh@1104 -- # allowed 00:05:16.398 06:23:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:16.398 06:23:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:16.398 06:23:08 -- setup/acl.sh@45 -- # setup output config 00:05:16.398 06:23:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.398 06:23:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.334 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.334 06:23:09 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:17.334 06:23:09 -- setup/acl.sh@28 -- # local dev driver 00:05:17.334 06:23:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:17.334 06:23:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:17.334 06:23:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:17.334 06:23:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:17.334 06:23:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:17.334 06:23:09 -- setup/acl.sh@48 -- # setup reset 00:05:17.334 06:23:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.334 06:23:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.901 00:05:17.901 real 0m1.520s 00:05:17.901 user 0m0.669s 00:05:17.901 sys 0m0.847s 00:05:17.901 06:23:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.901 06:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.901 ************************************ 00:05:17.901 END TEST allowed 00:05:17.901 ************************************ 00:05:17.901 00:05:17.901 real 0m4.272s 00:05:17.901 user 0m1.839s 00:05:17.901 sys 0m2.384s 00:05:17.901 06:23:10 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.901 06:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.901 ************************************ 00:05:17.901 END TEST acl 00:05:17.901 ************************************ 00:05:17.901 06:23:10 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:17.901 06:23:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.901 06:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.901 06:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.901 ************************************ 00:05:17.901 START TEST hugepages 00:05:17.901 ************************************ 00:05:17.901 06:23:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.160 * Looking for test storage... 00:05:18.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.160 06:23:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:18.160 06:23:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:18.161 06:23:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:18.161 06:23:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:18.161 06:23:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:18.161 06:23:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:18.161 06:23:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:18.161 06:23:10 -- setup/common.sh@18 -- # local node= 00:05:18.161 06:23:10 -- setup/common.sh@19 -- # local var val 00:05:18.161 06:23:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.161 06:23:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.161 06:23:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.161 06:23:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.161 06:23:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.161 06:23:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.161 06:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.161 06:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.161 06:23:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4747024 kB' 'MemAvailable: 7376852 kB' 'Buffers: 2684 kB' 'Cached: 2831572 kB' 'SwapCached: 0 kB' 'Active: 481936 kB' 'Inactive: 2454576 kB' 'Active(anon): 112768 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 104152 kB' 'Mapped: 50928 kB' 'Shmem: 10512 kB' 'KReclaimable: 86060 kB' 'Slab: 187824 kB' 'SReclaimable: 86060 kB' 'SUnreclaim: 101764 kB' 'KernelStack: 6828 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 304240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:18.161 06:23:10 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.161 06:23:10 -- setup/common.sh@32 -- # continue 00:05:18.161 06:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.161 06:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.161 [the identical "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" xtrace pair repeats for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd] 00:05:18.162 06:23:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 06:23:10 -- 
setup/common.sh@32 -- # continue 00:05:18.162 06:23:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.162 06:23:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.162 06:23:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:18.162 06:23:10 -- setup/common.sh@33 -- # echo 2048 00:05:18.162 06:23:10 -- setup/common.sh@33 -- # return 0 00:05:18.162 06:23:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:18.162 06:23:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:18.162 06:23:10 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:18.162 06:23:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:18.162 06:23:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:18.162 06:23:10 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:18.162 06:23:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:18.162 06:23:10 -- setup/hugepages.sh@207 -- # get_nodes 00:05:18.162 06:23:10 -- setup/hugepages.sh@27 -- # local node 00:05:18.162 06:23:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.162 06:23:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:18.162 06:23:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.162 06:23:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.162 06:23:10 -- setup/hugepages.sh@208 -- # clear_hp 00:05:18.162 06:23:10 -- setup/hugepages.sh@37 -- # local node hp 00:05:18.162 06:23:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.162 06:23:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.162 06:23:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.162 06:23:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.162 06:23:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.162 06:23:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.162 06:23:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.162 06:23:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:18.162 06:23:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.162 06:23:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.162 06:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.162 ************************************ 00:05:18.162 START TEST default_setup 00:05:18.162 ************************************ 00:05:18.162 06:23:10 -- common/autotest_common.sh@1104 -- # default_setup 00:05:18.162 06:23:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:18.162 06:23:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:18.162 06:23:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.162 06:23:10 -- setup/hugepages.sh@51 -- # shift 00:05:18.162 06:23:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:18.162 06:23:10 -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.162 06:23:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.162 06:23:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:18.162 06:23:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.162 06:23:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.162 06:23:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.162 06:23:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:18.162 06:23:10 -- setup/hugepages.sh@65 -- # local 
00:05:18.162 06:23:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:18.162 06:23:10 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:18.162 06:23:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:18.162 06:23:10 -- setup/hugepages.sh@51 -- # shift
00:05:18.162 06:23:10 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:18.162 06:23:10 -- setup/hugepages.sh@52 -- # local node_ids
00:05:18.162 06:23:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:18.162 06:23:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:18.162 06:23:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:18.162 06:23:10 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:18.162 06:23:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:18.162 06:23:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:18.162 06:23:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:18.162 06:23:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:18.162 06:23:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:18.162 06:23:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:18.162 06:23:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:18.162 06:23:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:18.162 06:23:10 -- setup/hugepages.sh@73 -- # return 0
00:05:18.162 06:23:10 -- setup/hugepages.sh@137 -- # setup output
00:05:18.162 06:23:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.162 06:23:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.781 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:19.043 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:05:19.043 06:23:11 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:19.043 06:23:11 -- setup/hugepages.sh@89 -- # local node
00:05:19.043 06:23:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:19.043 06:23:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:19.043 06:23:11 -- setup/hugepages.sh@92 -- # local surp
00:05:19.043 06:23:11 -- setup/hugepages.sh@93 -- # local resv
00:05:19.043 06:23:11 -- setup/hugepages.sh@94 -- # local anon
00:05:19.043 06:23:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:19.043 06:23:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:19.043 06:23:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:19.043 06:23:11 -- setup/common.sh@18 -- # local node=
00:05:19.043 06:23:11 -- setup/common.sh@19 -- # local var val
00:05:19.043 06:23:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.043 06:23:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.043 06:23:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.043 06:23:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.043 06:23:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.043 06:23:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.043 06:23:11 -- setup/common.sh@31 -- # IFS=': '
00:05:19.043 06:23:11 -- setup/common.sh@31 -- # read -r var val _
00:05:19.043 06:23:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831688 kB' 'MemAvailable: 9461344 kB' 'Buffers: 2684 kB' 'Cached: 2831564 kB' 'SwapCached: 0 kB' 'Active: 498272 kB' 'Inactive: 2454584 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120220 kB' 'Mapped: 51092 kB' 'Shmem: 10488 kB' 'KReclaimable: 85700 kB' 'Slab: 187588 kB' 'SReclaimable: 85700 kB' 'SUnreclaim: 101888 kB' 'KernelStack: 6816 kB' 'PageTables: 4640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
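
Each get_meminfo call below repeats the same linear scan over a snapshot like the one just printed. A minimal sketch reconstructed from the setup/common.sh line references in this trace — the in-tree helper differs in detail (the here-string read and the function framing here are illustrative):

shopt -s extglob   # for the +([0-9]) pattern used on per-node files
get_meminfo() {
    local get=$1 node=$2 var val _ line mem mem_f=/proc/meminfo
    # per-node queries read the node's sysfs meminfo instead of /proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the key we want: keep scanning
        echo "$val"                        # e.g. Hugepagesize -> 2048
        return 0
    done
    return 1
}

Called as get_meminfo AnonHugePages here, or as get_meminfo HugePages_Surp 0 for the node-0 variant at the end of this section.
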
00:05:19.043 [.. setup/common.sh@31-32: get_meminfo walks every key from MemTotal through HardwareCorrupted; none matches AnonHugePages, so each iteration falls through to continue ..]
00:05:19.044 06:23:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:19.044 06:23:11 -- setup/common.sh@33 -- # echo 0
00:05:19.044 06:23:11 -- setup/common.sh@33 -- # return 0
00:05:19.044 06:23:11 -- setup/hugepages.sh@97 -- # anon=0
00:05:19.044 06:23:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:19.044 06:23:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.044 06:23:11 -- setup/common.sh@18 -- # local node=
00:05:19.044 06:23:11 -- setup/common.sh@19 -- # local var val
00:05:19.044 06:23:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.044 06:23:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.044 06:23:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.044 06:23:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.044 06:23:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.044 06:23:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.044 06:23:11 -- setup/common.sh@31 -- # IFS=': '
00:05:19.044 06:23:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831504 kB' 'MemAvailable: 9461160 kB' 'Buffers: 2684 kB' 'Cached: 2831564 kB' 'SwapCached: 0 kB' 'Active: 497976 kB' 'Inactive: 2454584 kB' 'Active(anon): 128808 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 51084 kB' 'Shmem: 10488 kB' 'KReclaimable: 85700 kB' 'Slab: 187568 kB' 'SReclaimable: 85700 kB' 'SUnreclaim: 101868 kB' 'KernelStack: 6720 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:19.045 06:23:11 -- setup/common.sh@31 -- # read -r var val _
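
For the anon term just recorded, verify_nr_hugepages only samples AnonHugePages when transparent hugepages can actually be in play; a sketch of the gate traced at hugepages.sh@96-97 (the else branch is an assumption — this run only exercises the sampling path):

thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
if [[ $thp_state != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, so THP adds nothing
else
    anon=0
fi
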
00:05:19.045 [.. setup/common.sh@31-32: the scan walks every key from MemTotal through HugePages_Rsvd; none matches HugePages_Surp, each iteration continues ..]
00:05:19.046 06:23:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.046 06:23:11 -- setup/common.sh@33 -- # echo 0
00:05:19.046 06:23:11 -- setup/common.sh@33 -- # return 0
00:05:19.046 06:23:11 -- setup/hugepages.sh@99 -- # surp=0
00:05:19.046 06:23:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:19.046 06:23:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
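
Outside the harness these single-key lookups don't need the full scan loop; for a quick manual check, an awk one-liner returns the identical value:

awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # prints 0, matching the resv below
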
00:05:19.046 06:23:11 -- setup/common.sh@18 -- # local node=
00:05:19.046 06:23:11 -- setup/common.sh@19 -- # local var val
00:05:19.046 06:23:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.046 06:23:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.046 06:23:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.046 06:23:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.046 06:23:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.046 06:23:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.046 06:23:11 -- setup/common.sh@31 -- # IFS=': '
00:05:19.046 06:23:11 -- setup/common.sh@31 -- # read -r var val _
00:05:19.046 06:23:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831596 kB' 'MemAvailable: 9461264 kB' 'Buffers: 2684 kB' 'Cached: 2831564 kB' 'SwapCached: 0 kB' 'Active: 497312 kB' 'Inactive: 2454588 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119480 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187556 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6720 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:19.046 [.. setup/common.sh@31-32: the scan walks every key from MemTotal through HugePages_Free; none matches HugePages_Rsvd, each iteration continues ..]
00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.047 06:23:11 -- setup/common.sh@33 -- # echo 0
00:05:19.047 06:23:11 -- setup/common.sh@33 -- # return 0
00:05:19.047 06:23:11 -- setup/hugepages.sh@100 -- # resv=0
00:05:19.047 nr_hugepages=1024
00:05:19.047 06:23:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:19.047 resv_hugepages=0
00:05:19.047 06:23:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:19.047 surplus_hugepages=0
00:05:19.047 06:23:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:19.047 anon_hugepages=0
00:05:19.047 06:23:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:19.047 06:23:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:19.047 06:23:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:19.047 06:23:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.047 06:23:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.047 06:23:11 -- setup/common.sh@18 -- # local node=
00:05:19.047 06:23:11 -- setup/common.sh@19 -- # local var val
00:05:19.047 06:23:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.047 06:23:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.047 06:23:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.047 06:23:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.047 06:23:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.047 06:23:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
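
The arithmetic behind the checks at hugepages.sh@107-110 above: the kernel's HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved pages just collected. A sketch (the intermediate assignment is illustrative — the trace inlines the values):

total=$(get_meminfo HugePages_Total)        # 1024 here
(( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
(( total == nr_hugepages ))                 # no surplus or reserved pages, so also exact
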
IFS=': ' 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.047 06:23:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831596 kB' 'MemAvailable: 9461264 kB' 'Buffers: 2684 kB' 'Cached: 2831564 kB' 'SwapCached: 0 kB' 'Active: 497572 kB' 'Inactive: 2454588 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119480 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187556 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6720 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.047 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.047 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.048 06:23:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.048 06:23:11 -- setup/common.sh@32 -- # continue 00:05:19.048 06:23:11 -- 
[xtrace elided: setup/common.sh@31-32 keeps looping "IFS=': '; read -r var val _" and testing each remaining /proc/meminfo field against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l - Active(anon) through Unaccepted all fail and hit continue]
00:05:19.049 06:23:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.049 06:23:11 -- setup/common.sh@33 -- # echo 1024
00:05:19.049 06:23:11 -- setup/common.sh@33 -- # return 0
00:05:19.049 06:23:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:19.049 06:23:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:19.049 06:23:11 -- setup/hugepages.sh@27 -- # local node
00:05:19.049 06:23:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.049 06:23:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:19.049 06:23:11 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:19.049 06:23:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:19.049 06:23:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:19.049 06:23:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:19.049 06:23:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:19.049 06:23:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.049 06:23:11 -- setup/common.sh@18 -- # local node=0
00:05:19.049 06:23:11 -- setup/common.sh@19 -- # local var val
00:05:19.049 06:23:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.049 06:23:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.049 06:23:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:19.049 06:23:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:19.049 06:23:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.049 06:23:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.049 06:23:11 -- setup/common.sh@31 -- # IFS=': '
00:05:19.049 06:23:11 -- setup/common.sh@31 -- # read -r var val _
00:05:19.049 06:23:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831596 kB' 'MemUsed: 5407520 kB' 'SwapCached: 0 kB' 'Active: 497536 kB' 'Inactive: 2454588 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 2834248 kB' 'Mapped: 50928 kB' 'AnonPages: 119444 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85716 kB' 'Slab: 187556 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
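(Editor's note: for readers skimming this trace, here is a minimal bash sketch of the field scan that setup/common.sh@17-33 is performing above. It is reconstructed solely from the trace lines in this log - variable and file names as they appear there - and simplified: the traced helper first slurps the file with mapfile and strips any 'Node N ' prefix with an extglob pattern, while this sketch uses sed and reads the stream directly. The real setup/common.sh may differ.)

    #!/usr/bin/env bash
    # Sketch: return one field from (per-node) meminfo, as the trace does.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # With a node id, prefer the per-node sysfs meminfo (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every mismatch logs a 'continue' above
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total     # prints 1024 on this runner
    get_meminfo HugePages_Surp 0    # node0 surplus pages; 0 in this run

Splitting on IFS=': ' is why each meminfo line cleanly yields a field name in var and a bare number in val, with the trailing 'kB' (when present) falling into the throwaway _.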
[xtrace elided: the same setup/common.sh@31-32 scan now walks the node0 meminfo fields - MemTotal through HugePages_Free - each failing [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hitting continue]
00:05:19.050 06:23:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.050 06:23:11 -- setup/common.sh@33 -- # echo 0
00:05:19.050 06:23:11 -- setup/common.sh@33 -- # return 0
00:05:19.050 06:23:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:19.050 06:23:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:19.050 06:23:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:19.050 06:23:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:19.050 node0=1024 expecting 1024
00:05:19.050 06:23:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:19.050 06:23:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:19.050
00:05:19.050 real 0m0.987s
00:05:19.050 user 0m0.476s
00:05:19.050 sys 0m0.447s
00:05:19.050 06:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:19.050 06:23:11 -- common/autotest_common.sh@10 -- # set +x
00:05:19.050 ************************************
00:05:19.050 END TEST default_setup
00:05:19.050 ************************************
00:05:19.309 06:23:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:19.309 06:23:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:19.309 06:23:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:19.309 06:23:11 -- common/autotest_common.sh@10 -- # set +x
00:05:19.309 ************************************
00:05:19.309 START TEST per_node_1G_alloc
00:05:19.309 ************************************
00:05:19.309 06:23:11 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:19.309 06:23:11 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:19.309 06:23:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:19.309 06:23:11 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:19.309 06:23:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:19.309 06:23:11 -- setup/hugepages.sh@51 -- # shift
00:05:19.309 06:23:11 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:19.309 06:23:11 -- setup/hugepages.sh@52 -- # local node_ids
00:05:19.309 06:23:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:19.309 06:23:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:19.309 06:23:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:19.309 06:23:11 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:19.309 06:23:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:19.309 06:23:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:19.309 06:23:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:19.309 06:23:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:19.309 06:23:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:19.309 06:23:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:19.309 06:23:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:19.309 06:23:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:19.309 06:23:11 -- setup/hugepages.sh@73 -- # return 0
00:05:19.309 06:23:11 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:19.309 06:23:11 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:19.309 06:23:11 -- setup/hugepages.sh@146 -- # setup output
00:05:19.309 06:23:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.309 06:23:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:19.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:19.570 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:19.570 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:19.570 06:23:12 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:19.570 06:23:12 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:19.570 06:23:12 -- setup/hugepages.sh@89 -- # local node
00:05:19.570 06:23:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:19.570 06:23:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:19.570 06:23:12 -- setup/hugepages.sh@92 -- # local surp
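(Editor's note: the jump above from "get_test_nr_hugepages 1048576 0" to "nr_hugepages=512" is just the requested size divided by the huge page size. A one-line check, using the 'Hugepagesize: 2048 kB' value that appears in the meminfo dumps later in this trace:)

    # 1 GiB requested on node 0, 2 MiB huge pages => 512 pages
    size_kb=1048576            # argument to get_test_nr_hugepages
    hugepagesize_kb=2048       # 'Hugepagesize: 2048 kB' in the dumps below
    echo $(( size_kb / hugepagesize_kb ))   # 512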
00:05:19.570 06:23:12 -- setup/hugepages.sh@93 -- # local resv
00:05:19.570 06:23:12 -- setup/hugepages.sh@94 -- # local anon
00:05:19.570 06:23:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:19.570 06:23:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:19.570 06:23:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:19.570 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:19.570 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:19.570 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.570 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.570 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.570 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.570 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.570 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.570 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.570 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:19.571 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7879904 kB' 'MemAvailable: 10509584 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 498056 kB' 'Inactive: 2454596 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 51052 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187552 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101824 kB' 'KernelStack: 6776 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-field scan against \A\n\o\n\H\u\g\e\P\a\g\e\s - MemTotal through HardwareCorrupted all hit continue]
00:05:19.572 06:23:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:19.572 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:19.572 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:19.572 06:23:12 -- setup/hugepages.sh@97 -- # anon=0
00:05:19.572 06:23:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:19.572 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.572 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:19.572 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:19.572 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.572 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
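(Editor's note: the hugepages.sh@96 test above is matching the kernel's transparent-hugepage mode string. On this runner the string is 'always [madvise] never', i.e. madvise is the active mode, so the string does not contain '[never]' and AnonHugePages gets sampled. A hedged, self-contained sketch of that check - the sysfs path is the standard kernel one, which this trace never prints:)

    #!/usr/bin/env bash
    # Which THP mode is active? The bracketed word is the current one.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled (here: madvise), so anonymous huge pages can exist
        # and must be measured before judging the hugetlb pool.
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # 0 in this run
    else
        anon=0   # THP off: nothing to correct for (branch not taken in this trace)
    fi
    echo "anon=$anon"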
00:05:19.572 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.572 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.572 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.572 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.572 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.572 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:19.572 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7879904 kB' 'MemAvailable: 10509584 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 2454596 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187556 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101828 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: per-field scan against \H\u\g\e\P\a\g\e\s\_\S\u\r\p - MemTotal through HugePages_Rsvd all hit continue]
00:05:19.573 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.573 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:19.573 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:19.573 06:23:12 -- setup/hugepages.sh@99 -- # surp=0
00:05:19.573 06:23:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:19.573 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:19.573 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:19.573 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:19.573 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.573 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.573 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.573 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.573 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.573 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.573 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.573 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:19.573 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7879904 kB' 'MemAvailable: 10509584 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497620 kB' 'Inactive: 2454596 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119532 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187556 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101828 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same per-field scan now runs against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; MemTotal through Unaccepted have hit continue as this excerpt ends]
00:05:19.574 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.574 06:23:12 -- setup/common.sh@32 -- # continue
00:05:19.574 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.574 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:19.574 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.574 06:23:12 -- setup/common.sh@32 -- # continue
00:05:19.574 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.574 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:19.574 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.574 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:19.574 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:19.574 06:23:12 -- setup/hugepages.sh@100 -- # resv=0
00:05:19.574 06:23:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:19.574 nr_hugepages=512
00:05:19.574 06:23:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:19.574 resv_hugepages=0
00:05:19.574 06:23:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:19.574 surplus_hugepages=0
00:05:19.574 06:23:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:19.574 anon_hugepages=0
00:05:19.575 06:23:12 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:19.575 06:23:12 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:19.575 06:23:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.575 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.575 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:19.575 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:19.575 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.575 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.575 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.575 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.575 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.575 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.575 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.575 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7879904 kB' 'MemAvailable: 10509584 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497612 kB' 'Inactive: 2454596 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119528 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187556 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101828 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
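[Editor's note: the xtrace above is setup/common.sh's get_meminfo walking a meminfo dump one field at a time until the requested name matches. A condensed sketch of that lookup, assuming nothing beyond what the trace shows -- the function name and the simplified prefix handling are ours, not the suite's exact code:]

    # Minimal sketch of the traced lookup (name ours; error handling and the
    # suite's extglob stripping simplified). Prints the value of one meminfo
    # field, optionally from a per-NUMA-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}              # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=512
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # -> 0 on the dump above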
00:05:19.575 [... setup/common.sh@31-32 xtrace: the same field-by-field scan repeats against HugePages_Total; every field from MemTotal through Unaccepted is read and skipped via continue ...]
00:05:19.837 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.837 06:23:12 -- setup/common.sh@33 -- # echo 512
00:05:19.837 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:19.837 06:23:12 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:19.837 06:23:12 -- setup/hugepages.sh@112 -- # get_nodes
00:05:19.837 06:23:12 -- setup/hugepages.sh@27 -- # local node
00:05:19.837 06:23:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:19.837 06:23:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:19.837 06:23:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:19.837 06:23:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:19.837 06:23:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:19.837 06:23:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:19.837 06:23:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:19.837 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.837 06:23:12 -- setup/common.sh@18 -- # local node=0
00:05:19.837 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:19.837 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.837 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.837 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:19.837 06:23:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:19.837 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.837 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.837 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:19.837 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7879904 kB' 'MemUsed: 4359212 kB' 'SwapCached: 0 kB' 'Active: 497736 kB' 'Inactive: 2454596 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 2834252 kB' 'Mapped: 50928 kB' 'AnonPages: 119432 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85728 kB' 'Slab: 187556 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
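[Editor's note: get_nodes has just recorded one hugepage pool per NUMA node from /sys/devices/system/node. A sketch of that bookkeeping -- the array name follows hugepages.sh; get_meminfo_sketch is the illustrative helper from the note above, not suite code:]

    # One entry per node directory; each node's meminfo reports its own pool.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                                       # node0 -> 0
        nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
    done
    echo "node0=${nodes_sys[0]}"   # single-node VM here: node0=512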
00:05:19.837 [... setup/common.sh@31-32 xtrace: node0's meminfo fields from MemTotal through HugePages_Free are read and skipped while looking for HugePages_Surp ...]
00:05:19.838 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.838 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:19.838 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:19.838 06:23:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:19.838 06:23:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:19.838 06:23:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:19.838 06:23:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:19.838 06:23:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:19.838 node0=512 expecting 512
00:05:19.838 06:23:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:19.838
00:05:19.838 real	0m0.564s
00:05:19.838 user	0m0.264s
00:05:19.838 sys	0m0.311s
00:05:19.838 06:23:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:19.838 06:23:12 -- common/autotest_common.sh@10 -- # set +x
00:05:19.838 ************************************
00:05:19.838 END TEST per_node_1G_alloc
00:05:19.838 ************************************
00:05:19.838 06:23:12 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:19.838 06:23:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:19.838 06:23:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:19.838 06:23:12 -- common/autotest_common.sh@10 -- # set +x
00:05:19.838 ************************************
00:05:19.838 START TEST even_2G_alloc
00:05:19.838 ************************************
00:05:19.838 06:23:12 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:05:19.838 06:23:12 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:19.838 06:23:12 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:19.838 06:23:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:19.838 06:23:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
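[Editor's note: get_test_nr_hugepages has just turned a 2 GiB request into a page count. The arithmetic, spelled out with values from the trace (variable names ours):]

    # The size argument is in kB: 2097152 kB = 2 GiB. With this VM's
    # Hugepagesize of 2048 kB that yields the nr_hugepages=1024 seen next.
    size_kb=2097152
    hugepage_kb=2048                     # Hugepagesize from the dumps above
    echo $(( size_kb / hugepage_kb ))    # -> 1024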
00:05:19.839 06:23:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:19.839 06:23:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:19.839 06:23:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:19.839 06:23:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:19.839 06:23:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:19.839 06:23:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:19.839 06:23:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:19.839 06:23:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:19.839 06:23:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:19.839 06:23:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:19.839 06:23:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:19.839 06:23:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:19.839 06:23:12 -- setup/hugepages.sh@83 -- # : 0
00:05:19.839 06:23:12 -- setup/hugepages.sh@84 -- # : 0
00:05:19.839 06:23:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:19.839 06:23:12 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:19.839 06:23:12 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:19.839 06:23:12 -- setup/hugepages.sh@153 -- # setup output
00:05:19.839 06:23:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.839 06:23:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:20.098 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.098 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.098 06:23:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:20.098 06:23:12 -- setup/hugepages.sh@89 -- # local node
00:05:20.098 06:23:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:20.098 06:23:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:20.098 06:23:12 -- setup/hugepages.sh@92 -- # local surp
00:05:20.098 06:23:12 -- setup/hugepages.sh@93 -- # local resv
00:05:20.098 06:23:12 -- setup/hugepages.sh@94 -- # local anon
00:05:20.099 06:23:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:20.099 06:23:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:20.099 06:23:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:20.099 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:20.099 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:20.099 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.099 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.099 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.099 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.099 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.099 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.099 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:20.099 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831212 kB' 'MemAvailable: 9460892 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 498292 kB' 'Inactive: 2454596 kB' 'Active(anon): 129124 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 51264 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187568 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6776 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
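[Editor's note: above, get_test_nr_hugepages_per_node spread the 1024 pages across the (single) node, and NRHUGE/HUGE_EVEN_ALLOC handed that target to scripts/setup.sh. A sketch of the even split under the traced values -- the loop shape follows the hugepages.sh trace, the explicit remainder handling is our simplification:]

    # Even distribution sketch: one node, 1024 pages, each node gets an
    # equal share of whatever is still unassigned.
    _nr_hugepages=1024 _no_nodes=1
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 1024 for node0
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        _no_nodes=$(( _no_nodes - 1 ))
    done

    # The env vars are exactly what the trace sets before calling setup.sh:
    #   sudo NRHUGE=1024 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh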
00:05:20.099 [... setup/common.sh@31-32 xtrace: the dump's fields from MemTotal through HardwareCorrupted are read and skipped while looking for AnonHugePages ...]
00:05:20.361 06:23:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.361 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:20.361 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:20.361 06:23:12 -- setup/hugepages.sh@97 -- # anon=0
00:05:20.361 06:23:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.361 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.361 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:20.361 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:20.361 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.361 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.361 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.361 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.361 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.362 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.362 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:20.362 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831464 kB' 'MemAvailable: 9461144 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497572 kB' 'Inactive: 2454596 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187576 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101848 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0'
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:20.362 [... setup/common.sh@31-32 xtrace: the fields of this dump are read with IFS=': ' and compared one by one against HugePages_Surp; non-matches fall through to continue ...]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.362 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.362 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.362 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.362 06:23:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.362 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.363 06:23:12 -- setup/common.sh@33 -- # echo 0 00:05:20.363 06:23:12 -- setup/common.sh@33 -- # return 0 00:05:20.363 06:23:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.363 06:23:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.363 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.363 06:23:12 -- setup/common.sh@18 -- # local node= 00:05:20.363 06:23:12 -- setup/common.sh@19 -- # local var val 00:05:20.363 06:23:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.363 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.363 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.363 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.363 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.363 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831464 kB' 'MemAvailable: 9461144 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 2454596 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 51188 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187576 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101848 kB' 'KernelStack: 6752 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.363 06:23:12 -- setup/common.sh@32 -- # continue 00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': 
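Every get_meminfo call traced here follows the same pattern: choose a source file (per-node sysfs meminfo when a node argument is given, /proc/meminfo otherwise), read it into an array, then scan field by field until the requested key matches and echo its value. A minimal sketch of that helper, reconstructed from the xtrace above rather than copied from setup/common.sh:

  # Sketch inferred from the xtrace; not the verbatim setup/common.sh source.
  # extglob is needed for the "Node <n> " prefix strip on per-node files.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f mem
      mem_f=/proc/meminfo
      # per-node counters live in sysfs and carry a "Node <n> " prefix
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # e.g. 0 for HugePages_Surp in this run
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Under this sketch the echo 0 / return 0 lines above correspond to the scan reaching the HugePages_Surp field; the per-field comparisons against the backslash-escaped pattern are what the condensed iterations summarize.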
00:05:20.363 06:23:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:20.363 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:20.363 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:20.363 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:20.363 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.363 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.363 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.363 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.363 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.363 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.363 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:20.363 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:20.363 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831464 kB' 'MemAvailable: 9461144 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 2454596 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 51188 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187576 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101848 kB' 'KernelStack: 6752 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:20.364 06:23:12 -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue (one iteration per meminfo field, MemTotal through HugePages_Free; no match)
00:05:20.364 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.364 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:20.364 06:23:12 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:20.364 06:23:12 -- setup/hugepages.sh@100 -- # resv=0
00:05:20.364 06:23:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:20.364 06:23:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:20.364 06:23:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:20.364 06:23:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:20.364 06:23:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.364 06:23:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
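The hugepages.sh@107 assertion above checks the accounting identity the test relies on: the pool size reported by the kernel must equal the requested page count plus surplus plus reserved pages. A hedged sketch of that check, mirroring the traced expression (variable names assumed; get_meminfo as sketched earlier):

  # Consistency check mirroring the setup/hugepages.sh@107 trace (names assumed).
  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  if (( total != nr_hugepages + surp + resv )); then
      echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
      exit 1
  fi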
00:05:20.364 06:23:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:20.364 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:20.364 06:23:12 -- setup/common.sh@18 -- # local node=
00:05:20.364 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:20.364 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.364 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.364 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.364 06:23:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.364 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.364 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.364 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831724 kB' 'MemAvailable: 9461404 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497764 kB' 'Inactive: 2454596 kB' 'Active(anon): 128596 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 51188 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187576 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101848 kB' 'KernelStack: 6804 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:20.365 06:23:12 -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (one iteration per meminfo field, MemTotal through Unaccepted; no match)
00:05:20.366 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:20.366 06:23:12 -- setup/common.sh@33 -- # echo 1024
00:05:20.366 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:20.366 06:23:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.366 06:23:12 -- setup/hugepages.sh@112 -- # get_nodes
00:05:20.366 06:23:12 -- setup/hugepages.sh@27 -- # local node
00:05:20.366 06:23:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.366 06:23:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:20.366 06:23:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:20.366 06:23:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:20.366 06:23:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:20.366 06:23:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:20.366 06:23:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:20.366 06:23:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.366 06:23:12 -- setup/common.sh@18 -- # local node=0
00:05:20.366 06:23:12 -- setup/common.sh@19 -- # local var val
00:05:20.366 06:23:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.366 06:23:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.366 06:23:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:20.366 06:23:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:20.366 06:23:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.366 06:23:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.366 06:23:12 -- setup/common.sh@31 -- # IFS=': '
00:05:20.366 06:23:12 -- setup/common.sh@31 -- # read -r var val _
00:05:20.366 06:23:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831612 kB' 'MemUsed: 5407504 kB' 'SwapCached: 0 kB' 'Active: 497692 kB' 'Inactive: 2454596 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 2834252 kB' 'Mapped: 50928 kB' 'AnonPages: 119620 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85728 kB' 'Slab: 187572 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
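For the per-node pass the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, where every line carries a "Node 0" prefix and only a subset of the fields appears (note MemUsed and FilePages in the snapshot above, which /proc/meminfo does not report). A small illustrative loop over all nodes, offered as an assumption rather than the script's own code:

  # Illustrative only: list per-node hugepage totals (single-node VM in this run).
  for n in /sys/devices/system/node/node[0-9]*; do
      node_id=${n##*node}
      # sysfs lines look like: "Node 0 HugePages_Total:  1024"
      awk -v id="$node_id" '$3 == "HugePages_Total:" { print "node" id "=" $4 }' "$n/meminfo"
  done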
00:05:20.366 06:23:12 -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (one iteration per node0 meminfo field, MemTotal through HugePages_Free; no match)
00:05:20.367 06:23:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.367 06:23:12 -- setup/common.sh@33 -- # echo 0
00:05:20.367 06:23:12 -- setup/common.sh@33 -- # return 0
00:05:20.367 06:23:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:20.367 06:23:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:20.367 06:23:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:05:20.367 ************************************
00:05:20.367 END TEST even_2G_alloc
00:05:20.367 ************************************
00:05:20.367 06:23:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:20.367 06:23:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:20.367 real 0m0.564s
00:05:20.367 user 0m0.268s
00:05:20.367 sys 0m0.306s
00:05:20.367 06:23:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.367 06:23:12 -- common/autotest_common.sh@10 -- # set +x
00:05:20.367 06:23:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:20.367 06:23:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:20.367 06:23:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:20.367 06:23:12 -- common/autotest_common.sh@10 -- # set +x
00:05:20.367 ************************************
00:05:20.367 START TEST odd_alloc
00:05:20.367 ************************************
00:05:20.367 06:23:12 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:20.367 06:23:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:20.367 06:23:12 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:20.367 06:23:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:20.367 06:23:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:20.367 06:23:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:20.367 06:23:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:20.367 06:23:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:20.367 06:23:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:20.367 06:23:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:20.367 06:23:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:20.367 06:23:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:20.367 06:23:12 -- setup/hugepages.sh@83 -- # : 0
00:05:20.367 06:23:12 -- setup/hugepages.sh@84 -- # : 0
00:05:20.367 06:23:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:20.367 06:23:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:20.367 06:23:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:20.367 06:23:12 -- setup/hugepages.sh@160 -- # setup output
00:05:20.367 06:23:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.367 06:23:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
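The odd_alloc test starting above requests an odd page count on purpose: HUGEMEM=2049 MB converts to 2098176 kB, which at the 2048 kB Hugepagesize seen in the snapshots is 1024.5 pages, and the trace settles on nr_hugepages=1025. A worked sketch of that arithmetic; the round-up rule is inferred from the 1025 in the trace, not read out of hugepages.sh:

  # Rounding rule inferred from the traced nr_hugepages=1025, not taken
  # verbatim from setup/hugepages.sh.
  HUGEMEM=2049                                   # MB, set by the odd_alloc test
  size=$(( HUGEMEM * 1024 ))                     # 2098176 kB, as in the trace
  default_hugepages=2048                         # kB, matches Hugepagesize above
  nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"              # 1025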
00:05:20.367 06:23:12 -- setup/hugepages.sh@160 -- # setup output
00:05:20.367 06:23:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.367 06:23:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:20.934 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.934 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.934 06:23:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:20.934 06:23:13 -- setup/hugepages.sh@89 -- # local node
00:05:20.934 06:23:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:20.934 06:23:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:20.934 06:23:13 -- setup/hugepages.sh@92 -- # local surp
00:05:20.934 06:23:13 -- setup/hugepages.sh@93 -- # local resv
00:05:20.934 06:23:13 -- setup/hugepages.sh@94 -- # local anon
00:05:20.935 06:23:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:20.935 06:23:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:20.935 06:23:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:20.935 06:23:13 -- setup/common.sh@18 -- # local node=
00:05:20.935 06:23:13 -- setup/common.sh@19 -- # local var val
00:05:20.935 06:23:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.935 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.935 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.935 06:23:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.935 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.935 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.935 06:23:13 -- setup/common.sh@31 -- # IFS=': '
00:05:20.935 06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6827480 kB' 'MemAvailable: 9457160 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 498364 kB' 'Inactive: 2454596 kB' 'Active(anon): 129196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120184 kB' 'Mapped: 51048 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187568 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6772 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:20.935 06:23:13 -- setup/common.sh@31 -- # read -r var val _
00:05:20.935 06:23:13 -- setup/common.sh@32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue (xtrace elided: every key from MemTotal through HardwareCorrupted is tested and skipped)
00:05:20.936 06:23:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.936 06:23:13 -- setup/common.sh@33 -- # echo 0
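The lookup traced above is a plain read loop over /proc/meminfo with IFS=': ': each key is compared against the requested one and skipped with continue until it matches, at which point its value is echoed. A self-contained sketch of the same idea (a simplification of setup/common.sh's get_meminfo, which additionally supports per-node meminfo files):

#!/usr/bin/env bash
# Print the value of a single /proc/meminfo key, mirroring the traced loop.
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue    # skip non-matching keys, as above
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1                                # key not present
}
get_meminfo_sketch AnonHugePages            # prints 0 on this box, hence anon=0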
00:05:20.936 06:23:13 -- setup/common.sh@33 -- # return 0
00:05:20.936 06:23:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:20.936 06:23:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.936 06:23:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.936 06:23:13 -- setup/common.sh@18 -- # local node=
00:05:20.936 06:23:13 -- setup/common.sh@19 -- # local var val
00:05:20.936 06:23:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.936 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.936 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.936 06:23:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.936 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.936 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.936 06:23:13 -- setup/common.sh@31 -- # IFS=': '
00:05:20.936 06:23:13 -- setup/common.sh@31 -- # read -r var val _
00:05:20.936 06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6827736 kB' 'MemAvailable: 9457416 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497648 kB' 'Inactive: 2454596 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119608 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187580 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101852 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:05:20.936 06:23:13 -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (xtrace elided: every key from MemTotal through HugePages_Rsvd is tested and skipped)
00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.938 06:23:13 -- setup/common.sh@33 -- # echo 0
00:05:20.938 06:23:13 -- setup/common.sh@33 -- # return 0
00:05:20.938 06:23:13 -- setup/hugepages.sh@99 -- # surp=0
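AnonHugePages, HugePages_Surp and HugePages_Rsvd are standard /proc/meminfo fields: transparent hugepages currently in use, surplus hugepages allocated beyond nr_hugepages, and hugepages reserved for mappings but not yet faulted in. The values the trace collects one key at a time can be spot-checked in a single pass:

grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo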
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6827736 kB' 'MemAvailable: 9457416 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497440 kB' 'Inactive: 2454596 kB' 'Active(anon): 128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187580 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101852 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.938 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.938 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 
-- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.939 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.939 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.940 06:23:13 -- setup/common.sh@33 -- # echo 0 00:05:20.940 06:23:13 -- setup/common.sh@33 -- # return 0 00:05:20.940 06:23:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.940 nr_hugepages=1025 00:05:20.940 06:23:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:20.940 resv_hugepages=0 00:05:20.940 06:23:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.940 surplus_hugepages=0 00:05:20.940 06:23:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.940 anon_hugepages=0 00:05:20.940 06:23:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.940 06:23:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.940 06:23:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:20.940 06:23:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.940 06:23:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.940 06:23:13 -- setup/common.sh@18 -- # local node= 00:05:20.940 06:23:13 -- setup/common.sh@19 -- # local var val 00:05:20.940 06:23:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.940 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.940 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.940 06:23:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.940 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.940 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6827736 kB' 'MemAvailable: 9457416 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497652 kB' 'Inactive: 2454596 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187584 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101856 kB' 'KernelStack: 6752 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 320300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.940 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.940 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.941 06:23:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.941 06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.941 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.941 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.941 06:23:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.941 
06:23:13 -- setup/common.sh@32 -- # continue 00:05:20.941 [xtrace condensed: get_meminfo steps through the remaining /proc/meminfo keys, Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted; none matches HugePages_Total, so each iteration takes setup/common.sh@32 continue] 00:05:20.942
06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.942 06:23:13 -- setup/common.sh@33 -- # echo 1025 00:05:20.942 06:23:13 -- setup/common.sh@33 -- # return 0 00:05:20.942 06:23:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.942 06:23:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.942 06:23:13 -- setup/hugepages.sh@27 -- # local node 00:05:20.942 06:23:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.942 06:23:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:20.942 06:23:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.942 06:23:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.942 06:23:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.942 06:23:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.942 06:23:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.942 
06:23:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.942 06:23:13 -- setup/common.sh@18 -- # local node=0 00:05:20.942 06:23:13 -- setup/common.sh@19 -- # local var val 00:05:20.942 06:23:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.942 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.942 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.942 06:23:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.942 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.942 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.942 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.942 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.942 
06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6827236 kB' 'MemUsed: 5411880 kB' 'SwapCached: 0 kB' 'Active: 497404 kB' 'Inactive: 2454596 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 2834252 kB' 'Mapped: 50928 kB' 'AnonPages: 119340 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85728 kB' 'Slab: 187576 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:20.942 
[xtrace condensed: node0 meminfo key scan from MemTotal through HugePages_Free finds no match for HugePages_Surp; setup/common.sh@32 continue on each] 00:05:20.943 
06:23:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.943 06:23:13 -- setup/common.sh@33 -- # echo 0 00:05:20.943 06:23:13 -- setup/common.sh@33 -- # return 0 00:05:20.943 06:23:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.943 06:23:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.943 06:23:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.943 node0=1025 expecting 1025 00:05:20.943 06:23:13 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:20.943 06:23:13 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:20.943 
00:05:20.943 real 0m0.545s 00:05:20.943 user 0m0.287s 00:05:20.943 sys 0m0.281s 00:05:20.943 06:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.943 06:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.943 ************************************ 00:05:20.943 END TEST odd_alloc 00:05:20.943 ************************************ 00:05:20.943 06:23:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:20.943 06:23:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.943 06:23:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.943 06:23:13 -- common/autotest_common.sh@10 -- # set +x
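Between the two tests is a natural point to restate what odd_alloc just verified: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages, and each node's share must pass the same check. A hedged recap in Bash; the hard-coded numbers are this run's values from the trace, not constants of the suite:

    nr_hugepages=1025              # the odd page count this test requested
    surp=0 resv=0                  # HugePages_Surp / HugePages_Rsvd from meminfo
    total=1025                     # HugePages_Total reported by the kernel

    # global check, as in setup/hugepages.sh@110
    (( total == nr_hugepages + surp + resv )) && echo "global count ok"

    # per-node check: node0 carries all pages on this single-node VM
    declare -a nodes_test=([0]=1025) nodes_sys=([0]=1025)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))       # expected per-node total
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done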
00:05:20.943 ************************************ 00:05:20.943 START TEST custom_alloc 00:05:20.943 ************************************ 00:05:20.943 06:23:13 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:20.943 06:23:13 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:20.943 06:23:13 -- setup/hugepages.sh@169 -- # local node 00:05:20.943 06:23:13 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:20.943 06:23:13 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:20.943 06:23:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:20.943 06:23:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:20.943 06:23:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.943 06:23:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.943 06:23:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.943 06:23:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.943 06:23:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.943 06:23:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.943 06:23:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.943 06:23:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.943 06:23:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.943 06:23:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.943 06:23:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.943 06:23:13 -- setup/hugepages.sh@83 -- # : 0 00:05:20.943 06:23:13 -- setup/hugepages.sh@84 -- # : 0 00:05:20.943 06:23:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.944 06:23:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:20.944 06:23:13 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:20.944 06:23:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.944 06:23:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.944 06:23:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.944 06:23:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:20.944 06:23:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.944 06:23:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.944 06:23:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.944 06:23:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.944 06:23:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.944 06:23:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.944 06:23:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.944 06:23:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:20.944 06:23:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.944 06:23:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.944 06:23:13 -- setup/hugepages.sh@78 -- # return 0 00:05:20.944 06:23:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:20.944 06:23:13 -- setup/hugepages.sh@187 -- # setup output 00:05:20.944 06:23:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.944 06:23:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.515 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.515 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.515 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.515 
06:23:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:21.515 06:23:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:21.515 06:23:13 -- setup/hugepages.sh@89 -- # local node 00:05:21.515 06:23:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.515 06:23:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.515 06:23:13 -- setup/hugepages.sh@92 -- # local surp 00:05:21.515 06:23:13 -- setup/hugepages.sh@93 -- # local resv 00:05:21.515 06:23:13 -- setup/hugepages.sh@94 -- # local anon 00:05:21.515 06:23:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.515 06:23:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.515 
06:23:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.515 06:23:13 -- setup/common.sh@18 -- # local node= 00:05:21.515 06:23:13 -- setup/common.sh@19 -- # local var val 00:05:21.515 06:23:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.515 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.515 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.515 06:23:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.515 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.515 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.515 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.515 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.515 
06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemAvailable: 10504012 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497780 kB' 'Inactive: 2454596 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 51004 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187588 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101860 kB' 'KernelStack: 6776 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:21.515 
[xtrace condensed: /proc/meminfo key scan from MemTotal through HardwareCorrupted finds no match for AnonHugePages; setup/common.sh@32 continue on each] 00:05:21.516
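While the AnonHugePages lookup runs, it is worth noting where the 512-page target being verified comes from: per the trace, get_test_nr_hugepages divides the requested size by the default hugepage size from /proc/meminfo. A sketch of that arithmetic, assuming Bash; variable names are illustrative:

    size_kb=1048576                           # 1 GiB of hugepages requested
    default_hugepages_kb=2048                 # Hugepagesize: 2048 kB per the blob above

    (( size_kb >= default_hugepages_kb )) || { echo "size below one page" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512

    # single-node layout, matching the HUGENODE='nodes_hp[0]=512' set earlier
    HUGENODE="nodes_hp[0]=$nr_hugepages"
    echo "$HUGENODE"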
06:23:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.516 06:23:13 -- setup/common.sh@33 -- # echo 0 00:05:21.516 06:23:13 -- setup/common.sh@33 -- # return 0 00:05:21.516 06:23:13 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.516 06:23:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.516 
06:23:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.516 06:23:13 -- setup/common.sh@18 -- # local node= 00:05:21.516 06:23:13 -- setup/common.sh@19 -- # local var val 00:05:21.516 06:23:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.516 06:23:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.516 06:23:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.516 06:23:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.516 06:23:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.516 06:23:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.516 06:23:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.516 
06:23:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemAvailable: 10504012 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497608 kB' 'Inactive: 2454596 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119548 kB' 'Mapped: 51056 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187580 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101852 kB' 'KernelStack: 6696 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:21.516 06:23:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.516 
[xtrace condensed: /proc/meminfo key scan from MemTotal through HugePages_Rsvd, timestamps rolling from 06:23:13 to 06:23:14, finds no match for HugePages_Surp; setup/common.sh@32 continue on each] 00:05:21.518 
06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.518 06:23:14 -- setup/common.sh@33 -- # echo 0 00:05:21.518 06:23:14 -- setup/common.sh@33 -- # return 0 00:05:21.518 06:23:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.518 06:23:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.518 
06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.518 06:23:14 -- setup/common.sh@18 -- # local node= 00:05:21.518 06:23:14 -- setup/common.sh@19 -- # local var val 00:05:21.518 06:23:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.518 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.518 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.518 06:23:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.518 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.518 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.518
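With anon and surp both resolved to 0 and the HugePages_Rsvd lookup starting below, the data-gathering half of verify_nr_hugepages reduces to three meminfo reads gated by the THP mode string seen in the trace ("always [madvise] never"). A hedged outline in Bash, reusing the illustrative get_meminfo_sketch helper from earlier:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    # AnonHugePages only counts when THP is not pinned to [never]
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)              # 0 kB in this run
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)                 # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)                 # the read continuing below
    echo "anon=$anon surp=$surp resv=$resv"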
06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.518 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemAvailable: 10504012 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497692 kB' 'Inactive: 2454596 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119632 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187584 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101856 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:21.518 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.518 
[xtrace condensed: /proc/meminfo key scan from MemTotal through WritebackTmp finds no match for HugePages_Rsvd; setup/common.sh@32 continue on each] 00:05:21.519 06:23:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 
06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # continue 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.519 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.519 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.519 06:23:14 -- setup/common.sh@33 -- # echo 0 00:05:21.519 06:23:14 -- setup/common.sh@33 -- # return 0 00:05:21.519 06:23:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.519 nr_hugepages=512 00:05:21.519 06:23:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:21.519 resv_hugepages=0 00:05:21.519 06:23:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.519 surplus_hugepages=0 00:05:21.519 06:23:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.519 06:23:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.519 anon_hugepages=0 00:05:21.519 06:23:14 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.519 06:23:14 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:21.519 06:23:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.519 06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.519 06:23:14 -- setup/common.sh@18 -- # local node= 00:05:21.519 06:23:14 -- setup/common.sh@19 -- # local var val 00:05:21.519 06:23:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.519 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.519 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.519 06:23:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.519 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.519 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.519 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemAvailable: 10504012 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497752 kB' 'Inactive: 2454596 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187580 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101852 kB' 'KernelStack: 
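The get_meminfo helper, which the trace re-enters below for HugePages_Total, is short once the xtrace noise is stripped. A minimal standalone sketch, reconstructed from the traced commands (mapfile, the Node-prefix strip, the IFS=': ' scan) and simplified rather than copied from setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the traced helper; simplified, not setup/common.sh verbatim.
    shopt -s extglob
    get_meminfo() {    # get_meminfo <Key> [node-id]
        local get=$1 node=$2 mem_f=/proc/meminfo
        local mem line var val _
        # A node id redirects the lookup to that node's sysfs meminfo copy.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # wrong key: try the next line
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Rsvd    # prints 0 at this point in the run

Each [[ key == \H\u\g\e\P\a\g\e\s\_\... ]] / continue pair in the trace is one iteration of that loop; xtrace prints the expanded right-hand side of == with every character backslash-escaped so it stays a literal match, which is why the pattern looks mangled.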
00:05:21.519 06:23:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:21.519 06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:21.519 06:23:14 -- setup/common.sh@18 -- # local node=
00:05:21.519 06:23:14 -- setup/common.sh@19 -- # local var val
00:05:21.519 06:23:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:21.519 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.519 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.519 06:23:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.519 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.519 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.519 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemAvailable: 10504012 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 497752 kB' 'Inactive: 2454596 kB' 'Active(anon): 128584 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 50928 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187580 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101852 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... key-by-key scan as before, this time against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ...]
00:05:21.521 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:21.521 06:23:14 -- setup/common.sh@33 -- # echo 512
00:05:21.521 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:21.521 06:23:14 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:21.521 06:23:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:21.521 06:23:14 -- setup/hugepages.sh@27 -- # local node
00:05:21.521 06:23:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:21.521 06:23:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:21.521 06:23:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:21.521 06:23:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:21.521 06:23:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:21.521 06:23:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:21.521 06:23:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:21.521 06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.521 06:23:14 -- setup/common.sh@18 -- # local node=0
00:05:21.521 06:23:14 -- setup/common.sh@19 -- # local var val
00:05:21.521 06:23:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:21.521 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.521 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:21.521 06:23:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:21.521 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.521 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
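Note the branch at common.sh@23/@24 just above: with node=0 the helper switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines are prefixed with "Node 0 " (hence the mem=("${mem[@]#Node +([0-9]) }") strip). Folding those per-node figures back up can be sketched like this (illustrative only, not the repo's get_nodes verbatim):

    #!/usr/bin/env bash
    # Illustrative per-node fold; names and layout are assumptions, not
    # setup/hugepages.sh verbatim.
    shopt -s extglob nullglob
    declare -A nodes_sys
    total=0
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}    # ".../node0" -> "0", as in nodes_sys[${node##*node}]
        nodes_sys[$id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        (( total += nodes_sys[$id] ))
    done
    echo "nodes=${#nodes_sys[@]} per-node HugePages_Total sum=$total"
    grep ^HugePages_Total: /proc/meminfo    # should agree with the sum above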
00:05:21.521 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7874332 kB' 'MemUsed: 4364784 kB' 'SwapCached: 0 kB' 'Active: 497448 kB' 'Inactive: 2454596 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2834252 kB' 'Mapped: 50928 kB' 'AnonPages: 119644 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85728 kB' 'Slab: 187584 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... key-by-key scan over the node0 snapshot against \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:05:21.522 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.522 06:23:14 -- setup/common.sh@33 -- # echo 0
00:05:21.522 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:21.522 06:23:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:21.522 06:23:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:21.522 06:23:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:21.522 06:23:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:21.522 node0=512 expecting 512
06:23:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
06:23:14 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:21.522
00:05:21.522 real 0m0.538s
00:05:21.522 user 0m0.282s
00:05:21.522 sys 0m0.290s
00:05:21.522 06:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.522 06:23:14 -- common/autotest_common.sh@10 -- # set +x
00:05:21.522 ************************************
00:05:21.522 END TEST custom_alloc
00:05:21.522 ************************************
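Spelled out, the identity custom_alloc just verified is that the kernel's global hugepage count equals the requested pages plus reserved and surplus pages, and that node 0 holds all of them. With this run's numbers (variable names illustrative):

    #!/usr/bin/env bash
    # The pass condition, with values hard-coded from the run above.
    nr_hugepages=512    # requested by the test
    resv=0              # HugePages_Rsvd
    surp=0              # HugePages_Surp
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "node0=$nr_hugepages expecting $nr_hugepages"   # the line logged above
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi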
06:23:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
06:23:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
06:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
06:23:14 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST no_shrink_alloc
************************************
06:23:14 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
06:23:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
06:23:14 -- setup/hugepages.sh@49 -- # local size=2097152
06:23:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
06:23:14 -- setup/hugepages.sh@51 -- # shift
06:23:14 -- setup/hugepages.sh@52 -- # node_ids=('0')
06:23:14 -- setup/hugepages.sh@52 -- # local node_ids
06:23:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
06:23:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
06:23:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
06:23:14 -- setup/hugepages.sh@62 -- # user_nodes=('0')
06:23:14 -- setup/hugepages.sh@62 -- # local user_nodes
06:23:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
06:23:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
06:23:14 -- setup/hugepages.sh@67 -- # nodes_test=()
06:23:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
06:23:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
06:23:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
06:23:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
06:23:14 -- setup/hugepages.sh@73 -- # return 0
06:23:14 -- setup/hugepages.sh@198 -- # setup output
06:23:14 -- setup/common.sh@9 -- # [[ output == output ]]
06:23:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:22.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:22.092 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.092 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
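The nr_hugepages=1024 assignment above is just the requested size divided by the huge page size: 2097152 kB / 2048 kB = 1024 pages, which matches the Hugetlb: 2097152 kB figure in the snapshot that follows. As a one-liner (the kB unit of the argument is inferred from those figures, not from the script itself):

    #!/usr/bin/env bash
    # Requested size (kB, inferred) over Hugepagesize (kB) gives the page count.
    size_kb=2097152
    hp_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)    # 2048 on this box
    echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> 1024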
+([0-9]) }") 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831172 kB' 'MemAvailable: 9460852 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 498116 kB' 'Inactive: 2454596 kB' 'Active(anon): 128948 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120072 kB' 'Mapped: 51268 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187536 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101808 kB' 'KernelStack: 6780 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 320292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 
00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.092 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.092 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.093 06:23:14 -- setup/common.sh@33 -- # echo 0 00:05:22.093 06:23:14 -- setup/common.sh@33 -- # return 0 00:05:22.093 06:23:14 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.093 06:23:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.093 06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.093 06:23:14 -- setup/common.sh@18 -- # local node= 00:05:22.093 06:23:14 -- setup/common.sh@19 -- # local var val 00:05:22.093 06:23:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.093 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.093 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.093 06:23:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.093 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.093 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831692 kB' 'MemAvailable: 9461372 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 496048 kB' 'Inactive: 2454596 kB' 'Active(anon): 126880 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'AnonPages: 118028 kB' 'Mapped: 50280 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187584 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101856 kB' 'KernelStack: 6748 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.093 06:23:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.093 06:23:14 -- setup/common.sh@32 -- # continue 00:05:22.093 06:23:14 -- 
00:05:22.094 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.094 06:23:14 -- setup/common.sh@33 -- # echo 0
00:05:22.094 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:22.094 06:23:14 -- setup/hugepages.sh@99 -- # surp=0
00:05:22.094 06:23:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: get_meminfo locals and /proc/meminfo mapfile setup, identical to the HugePages_Surp call above except for get=HugePages_Rsvd]
00:05:22.094 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6831740 kB' 'MemAvailable: 9461420 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 495120 kB' 'Inactive: 2454596 kB' 'Active(anon): 125952 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117116 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 85728 kB' 'Slab: 187552 kB' 'SReclaimable: 85728 kB' 'SUnreclaim: 101824 kB' 'KernelStack: 6704 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: key-by-key scan against HugePages_Rsvd]
00:05:22.096 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.096 06:23:14 -- setup/common.sh@33 -- # echo 0
00:05:22.096 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:22.096 06:23:14 -- setup/hugepages.sh@100 -- # resv=0
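For readers reconstructing the helper from this trace: the three lookups above (anon, surp, resv) all run the same get_meminfo routine, and its shape can be read straight off the xtrace lines (the local get/node declarations, the mem_f selection, mapfile, the extglob prefix strip, and the IFS=': ' read loop). A minimal bash sketch of that pattern, inferred from the trace rather than quoted from SPDK's scripts/setup/common.sh:

shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of the helper the xtrace above walks through; names and structure
# are inferred from the trace, not copied from SPDK's source.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, prefer the kernel's per-node meminfo file.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node N "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		# "HugePages_Surp:    0" -> var=HugePages_Surp val=0
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

Called as get_meminfo HugePages_Surp it scans /proc/meminfo; called as get_meminfo HugePages_Surp 0 (as happens further down) it reads the node0 file instead, which is why the "Node 0 " prefix strip matters.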
00:05:22.096 nr_hugepages=1024
00:05:22.096 06:23:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:22.096 resv_hugepages=0
00:05:22.096 06:23:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:22.096 surplus_hugepages=0
00:05:22.096 06:23:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:22.096 anon_hugepages=0
00:05:22.096 06:23:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:22.096 06:23:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.096 06:23:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:22.096 06:23:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace elided: get_meminfo locals and /proc/meminfo mapfile setup, with get=HugePages_Total]
00:05:22.096 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6832000 kB' 'MemAvailable: 9461676 kB' 'Buffers: 2684 kB' 'Cached: 2831568 kB' 'SwapCached: 0 kB' 'Active: 495164 kB' 'Inactive: 2454596 kB' 'Active(anon): 125996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116860 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187516 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101800 kB' 'KernelStack: 6656 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: key-by-key scan against HugePages_Total]
00:05:22.097 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.097 06:23:14 -- setup/common.sh@33 -- # echo 1024
00:05:22.097 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:22.097 06:23:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.097 06:23:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.097 06:23:14 -- setup/hugepages.sh@27 -- # local node
00:05:22.097 06:23:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.097 06:23:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:22.097 06:23:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.097 06:23:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
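The arithmetic the trace just asserted (hugepages.sh@107 and @110) is worth spelling out: the kernel's HugePages_Total must equal the requested pool plus surplus plus reserved pages, and get_nodes then seeds one expectation bucket per NUMA node. A sketch of that check, assuming the get_meminfo sketch above (check_hugepage_accounting is a hypothetical name, not an SPDK function):

check_hugepage_accounting() {
	local nr_hugepages=$1  # the configured target; 1024 in this run
	local total surp resv

	total=$(get_meminfo HugePages_Total)  # 1024 here
	surp=$(get_meminfo HugePages_Surp)    # 0 here
	resv=$(get_meminfo HugePages_Rsvd)    # 0 here

	# Every allocated page must be accounted for as requested, surplus,
	# or reserved -- the condition behind hugepages.sh@107/@110.
	(( total == nr_hugepages + surp + resv ))
}

With this run's values, 1024 == 1024 + 0 + 0 holds, so the per-node loop that follows only has to confirm that node0 carries all 1024 pages.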
00:05:22.097 06:23:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:22.097 06:23:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:22.097 06:23:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:22.097 06:23:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.097 06:23:14 -- setup/common.sh@18 -- # local node=0
00:05:22.097 06:23:14 -- setup/common.sh@19 -- # local var val
00:05:22.097 06:23:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:22.097 06:23:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.097 06:23:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:22.097 06:23:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:22.097 06:23:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.097 06:23:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.097 06:23:14 -- setup/common.sh@31 -- # IFS=': '
00:05:22.097 06:23:14 -- setup/common.sh@31 -- # read -r var val _
00:05:22.097 06:23:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6832032 kB' 'MemUsed: 5407084 kB' 'SwapCached: 0 kB' 'Active: 495112 kB' 'Inactive: 2454596 kB' 'Active(anon): 125944 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454596 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2834252 kB' 'Mapped: 50088 kB' 'AnonPages: 117068 kB' 'Shmem: 10488 kB' 'KernelStack: 6640 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85716 kB' 'Slab: 187480 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: key-by-key scan of the node0 meminfo snapshot against HugePages_Surp]
00:05:22.098 06:23:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.098 06:23:14 -- setup/common.sh@33 -- # echo 0
00:05:22.098 06:23:14 -- setup/common.sh@33 -- # return 0
00:05:22.098 06:23:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:22.098 06:23:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:22.098 06:23:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:22.098 06:23:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:22.098 node0=1024 expecting 1024
00:05:22.098 06:23:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:22.098 06:23:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:22.098 06:23:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:22.098 06:23:14 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:22.098 06:23:14 -- setup/hugepages.sh@202 -- # setup output
00:05:22.098 06:23:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.098 06:23:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:22.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:22.619 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.619 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.619 INFO: Requested 512 hugepages but 1024 already allocated on node0
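The INFO line is setup.sh declining to shrink an already-sufficient pool: NRHUGE=512 was requested, node0 already holds 1024 pages, and with CLEAR_HUGE=no they stay allocated. A hedged illustration of that guard, using the kernel's standard per-node sysfs counter for 2 MiB pages (the real logic lives in scripts/setup.sh in the SPDK repo and may differ):

NRHUGE=512 node=0
nr_path=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages

current=$(<"$nr_path")
if (( current >= NRHUGE )); then
	# Matches the message seen in the log above.
	echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
else
	echo "$NRHUGE" > "$nr_path"  # requires root
fi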
00:05:22.619 06:23:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:22.619 06:23:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:22.619 06:23:15 -- setup/common.sh@18 -- # local node=
00:05:22.619 06:23:15 -- setup/common.sh@19 -- # local var val
00:05:22.619 06:23:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:22.619 06:23:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.619 06:23:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.619 06:23:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.619 06:23:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.619 06:23:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.619 06:23:15 -- setup/common.sh@31 -- # IFS=': '
00:05:22.619 06:23:15 -- setup/common.sh@31 -- # read -r var val _
00:05:22.619 06:23:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6832764 kB' 'MemAvailable: 9462444 kB' 'Buffers: 2684 kB' 'Cached: 2831572 kB' 'SwapCached: 0 kB' 'Active: 495588 kB' 'Inactive: 2454600 kB' 'Active(anon): 126420 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117584 kB' 'Mapped: 50320 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187396 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101680 kB' 'KernelStack: 6712 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace: every key of that snapshot, from MemTotal on, is tested against AnonHugePages and logs "continue" until AnonHugePages itself matches ...]
00:05:22.620 06:23:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:22.620 06:23:15 -- setup/common.sh@33 -- # echo 0
00:05:22.620 06:23:15 -- setup/common.sh@33 -- # return 0
00:05:22.620 06:23:15 -- setup/hugepages.sh@97 -- # anon=0
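anon=0 closes the first pass. Every [[ key == \A\n\o\n... ]] / continue pair condensed above is one iteration of a single loop: get_meminfo reads the whole meminfo file into an array, strips any "Node <n> " prefix, then splits each line on ': ' and compares the key with the requested field (the escaped pattern is just xtrace's rendering of the unquoted comparison). A condensed sketch of that helper under those assumptions, not the verbatim SPDK function:

#!/usr/bin/env bash
shopt -s extglob

# Sketch of the scan that dominates this trace: return the value of one
# meminfo field, from /proc/meminfo or a per-node meminfo file. Each
# non-matching key is exactly one "setup/common.sh@32 -- # continue" line.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local mem line
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it (extglob).
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo AnonHugePages      # -> 0 in the run above
get_meminfo HugePages_Total    # -> 1024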
00:05:22.620 06:23:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:22.620 06:23:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
[... same setup/common.sh@18-31 preamble as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' ...]
00:05:22.620 06:23:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6833032 kB' 'MemAvailable: 9462712 kB' 'Buffers: 2684 kB' 'Cached: 2831572 kB' 'SwapCached: 0 kB' 'Active: 495156 kB' 'Inactive: 2454600 kB' 'Active(anon): 125988 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117136 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187396 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101680 kB' 'KernelStack: 6656 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace: the full key scan repeats against HugePages_Surp, one "continue" per non-matching key ...]
00:05:22.622 06:23:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.622 06:23:15 -- setup/common.sh@33 -- # echo 0
00:05:22.622 06:23:15 -- setup/common.sh@33 -- # return 0
00:05:22.622 06:23:15 -- setup/hugepages.sh@99 -- # surp=0
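surp=0 ends the second full pass over /proc/meminfo; the HugePages_Rsvd and HugePages_Total lookups below each rescan the file from the top in the same way. For contrast, a one-pass alternative that collects all four hugepage counters at once (an illustration only, not what setup/common.sh does):

# Sketch: grab every HugePages_* counter in a single pass instead of one
# get_meminfo call (and one full file scan) per field. These four fields
# carry plain page counts, so there is no "kB" suffix to strip.
eval "$(awk -F': +' '/^HugePages_/ {print $1 "=" $2}' /proc/meminfo)"
echo "total=$HugePages_Total free=$HugePages_Free rsvd=$HugePages_Rsvd surp=$HugePages_Surp"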
00:05:22.622 06:23:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:22.622 06:23:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... same setup/common.sh@18-31 preamble: mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' ...]
00:05:22.622 06:23:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6833428 kB' 'MemAvailable: 9463108 kB' 'Buffers: 2684 kB' 'Cached: 2831572 kB' 'SwapCached: 0 kB' 'Active: 495120 kB' 'Inactive: 2454600 kB' 'Active(anon): 125952 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117060 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187396 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101680 kB' 'KernelStack: 6640 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 xtrace: the full key scan repeats against HugePages_Rsvd, one "continue" per non-matching key ...]
00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.623 06:23:15 -- setup/common.sh@33 -- # echo 0
00:05:22.623 06:23:15 -- setup/common.sh@33 -- # return 0
00:05:22.623 06:23:15 -- setup/hugepages.sh@100 -- # resv=0
00:05:22.623 nr_hugepages=1024
00:05:22.623 06:23:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
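At this point the verifier holds anon=0, surp=0, resv=0 and nr_hugepages=1024, and the checks at hugepages.sh@107-110 below assert the hugepage accounting identity: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages (and, for this test, equal nr_hugepages exactly). Restated as a standalone sketch (variable names follow the trace; the messages are illustrative):

# Sketch: the consistency check behind "(( 1024 == nr_hugepages + surp + resv ))".
nr_hugepages=1024
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi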
06:23:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.623 surplus_hugepages=0 00:05:22.623 06:23:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.623 anon_hugepages=0 00:05:22.623 06:23:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.623 06:23:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.623 06:23:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.623 06:23:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.623 06:23:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.623 06:23:15 -- setup/common.sh@18 -- # local node= 00:05:22.623 06:23:15 -- setup/common.sh@19 -- # local var val 00:05:22.623 06:23:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.623 06:23:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.623 06:23:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.623 06:23:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.623 06:23:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.623 06:23:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6833428 kB' 'MemAvailable: 9463108 kB' 'Buffers: 2684 kB' 'Cached: 2831572 kB' 'SwapCached: 0 kB' 'Active: 495100 kB' 'Inactive: 2454600 kB' 'Active(anon): 125932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117036 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 85716 kB' 'Slab: 187396 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101680 kB' 'KernelStack: 6640 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.623 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.623 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 
-- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 
00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.624 06:23:15 -- setup/common.sh@32 -- # continue 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.624 06:23:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.624 
06:23:15 [ ... the scan continues through ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted, none matching ... ]
00:05:22.625 06:23:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.625 06:23:15 -- setup/common.sh@33 -- # echo 1024
00:05:22.625 06:23:15 -- setup/common.sh@33 -- # return 0
00:05:22.625 06:23:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.625 06:23:15 -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.625 06:23:15 -- setup/hugepages.sh@27 -- # local node
00:05:22.625 06:23:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.625 06:23:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:22.625 06:23:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.625 06:23:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:22.625 06:23:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:22.625 06:23:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:22.625 06:23:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:22.625 06:23:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.625 06:23:15 -- setup/common.sh@18 -- # local node=0
00:05:22.625 06:23:15 -- setup/common.sh@19 -- # local var val
00:05:22.625 06:23:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:22.625 06:23:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.625 06:23:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:22.625 06:23:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:22.625 06:23:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.625 06:23:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.625 06:23:15 -- setup/common.sh@31 -- # IFS=': '
00:05:22.625 06:23:15 -- setup/common.sh@31 -- # read -r var val _
00:05:22.625 06:23:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6833428 kB' 'MemUsed: 5405688 kB' 'SwapCached: 0 kB' 'Active: 495208 kB' 'Inactive: 2454600 kB' 'Active(anon): 126040 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2454600 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2834256 kB' 'Mapped: 50088 kB' 'AnonPages: 117140 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85716 kB' 'Slab: 187396 kB' 'SReclaimable: 85716 kB' 'SUnreclaim: 101680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:22.625 06:23:15 [ ... setup/common.sh@32 xtrace elided: the node-0 list above is walked key by key against HugePages_Surp; MemTotal through HugePages_Free are all skipped with continue ... ]
00:05:22.626 06:23:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.626 06:23:15 -- setup/common.sh@33 -- # echo 0
00:05:22.626 06:23:15 -- setup/common.sh@33 -- # return 0
00:05:22.626 06:23:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:22.626 06:23:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:22.626 06:23:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:22.626 06:23:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:22.626 node0=1024 expecting 1024
00:05:22.626 06:23:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:22.626 06:23:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:22.626
00:05:22.626 real 0m1.049s
00:05:22.626 user 0m0.522s
00:05:22.626 sys 0m0.595s
00:05:22.626 06:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.626 06:23:15 -- common/autotest_common.sh@10 -- # set +x
00:05:22.626 ************************************
00:05:22.626 END TEST no_shrink_alloc
00:05:22.626 ************************************
00:05:22.626 06:23:15 -- setup/hugepages.sh@217 -- # clear_hp
00:05:22.626 06:23:15 -- setup/hugepages.sh@37 -- # local node hp
00:05:22.626 06:23:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:22.626 06:23:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:22.626 06:23:15 -- setup/hugepages.sh@41 -- # echo 0
00:05:22.626 06:23:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:22.626 06:23:15 -- setup/hugepages.sh@41 -- # echo 0
00:05:22.626 06:23:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:22.626 06:23:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:22.626
00:05:22.626 real 0m4.681s
00:05:22.626 user 0m2.260s
00:05:22.626 sys 0m2.481s
00:05:22.626 06:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.626 06:23:15 -- common/autotest_common.sh@10 -- # set +x
00:05:22.626 ************************************
00:05:22.626 END TEST hugepages
00:05:22.626 ************************************
00:05:22.626 06:23:15 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:22.626 06:23:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:22.626 06:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:22.626 06:23:15 -- common/autotest_common.sh@10 -- # set +x
00:05:22.885 ************************************
00:05:22.885 START TEST driver
00:05:22.885 ************************************
00:05:22.885 06:23:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:22.885 * Looking for test storage...
00:05:22.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:22.885 06:23:15 -- setup/driver.sh@68 -- # setup reset
00:05:22.885 06:23:15 -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:22.885 06:23:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:23.453 06:23:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:05:23.453 06:23:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:23.453 06:23:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:23.453 06:23:15 -- common/autotest_common.sh@10 -- # set +x
00:05:23.453 ************************************
00:05:23.453 START TEST guess_driver
00:05:23.453 ************************************
00:05:23.453 06:23:15 -- common/autotest_common.sh@1104 -- # guess_driver
00:05:23.453 06:23:15 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:05:23.453 06:23:15 -- setup/driver.sh@47 -- # local fail=0
00:05:23.453 06:23:15 -- setup/driver.sh@49 -- # pick_driver
00:05:23.453 06:23:15 -- setup/driver.sh@36 -- # vfio
00:05:23.453 06:23:15 -- setup/driver.sh@21 -- # local iommu_groups
00:05:23.453 06:23:15 -- setup/driver.sh@22 -- # local unsafe_vfio
00:05:23.454 06:23:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:05:23.454 06:23:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:05:23.454 06:23:15 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:05:23.454 06:23:15 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:05:23.454 06:23:15 -- setup/driver.sh@32 -- # return 1
00:05:23.454 06:23:15 -- setup/driver.sh@38 -- # uio
00:05:23.454 06:23:15 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:05:23.454 06:23:15 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:05:23.454 Looking for driver=uio_pci_generic
00:05:23.454 06:23:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:05:23.454 06:23:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:23.454 06:23:15 -- setup/driver.sh@45 -- # setup output config
00:05:23.454 06:23:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:23.454 06:23:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:24.020 06:23:16 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:05:24.020 06:23:16 -- setup/driver.sh@58 -- # continue
00:05:24.020 06:23:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:24.278 06:23:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:24.278 06:23:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:05:24.278 06:23:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:05:24.278 06:23:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:05:24.278 06:23:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
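pick_driver, traced above, prefers vfio when IOMMU groups are populated (or when unsafe no-IOMMU mode reports Y) and otherwise falls back to uio_pci_generic if modprobe can resolve that module, which is what happens on this VM. A condensed sketch of that decision; the function body is a reconstruction, not the SPDK script verbatim:

    #!/usr/bin/env bash
    # Note: an unmatched glob expands to itself, hence the -e test on the
    # first iommu_groups entry instead of a bare element count.
    pick_driver() {
        local iommu_groups unsafe_vfio
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if [[ -e ${iommu_groups[0]} || $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &>/dev/null; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
            return 1
        fi
    }

    pick_driver   # prints uio_pci_generic in this environment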
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.278 06:23:16 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:24.278 06:23:16 -- setup/driver.sh@65 -- # setup reset 00:05:24.278 06:23:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.278 06:23:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.844 00:05:24.844 real 0m1.414s 00:05:24.844 user 0m0.565s 00:05:24.844 sys 0m0.860s 00:05:24.844 06:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.844 ************************************ 00:05:24.844 END TEST guess_driver 00:05:24.844 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.844 ************************************ 00:05:24.844 00:05:24.844 real 0m2.112s 00:05:24.844 user 0m0.826s 00:05:24.844 sys 0m1.346s 00:05:24.844 06:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.844 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.844 ************************************ 00:05:24.844 END TEST driver 00:05:24.844 ************************************ 00:05:24.844 06:23:17 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:24.844 06:23:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.844 06:23:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.844 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:24.844 ************************************ 00:05:24.844 START TEST devices 00:05:24.844 ************************************ 00:05:24.844 06:23:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:25.103 * Looking for test storage... 00:05:25.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.103 06:23:17 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:25.103 06:23:17 -- setup/devices.sh@192 -- # setup reset 00:05:25.103 06:23:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.103 06:23:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.670 06:23:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:25.670 06:23:18 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:25.670 06:23:18 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:25.670 06:23:18 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:25.670 06:23:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.670 06:23:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:25.670 06:23:18 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:25.670 06:23:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.670 06:23:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:25.670 06:23:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:25.670 06:23:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.670 06:23:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:25.670 06:23:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:25.670 06:23:18 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.670 06:23:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:25.670 06:23:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:25.670 06:23:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:25.670 06:23:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.670 06:23:18 -- setup/devices.sh@196 -- # blocks=() 00:05:25.670 06:23:18 -- setup/devices.sh@196 -- # declare -a blocks 00:05:25.670 06:23:18 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:25.670 06:23:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:25.670 06:23:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:25.670 06:23:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.671 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:25.671 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.671 06:23:18 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:25.671 06:23:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:25.671 06:23:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:25.671 06:23:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:25.671 06:23:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:25.671 No valid GPT data, bailing 00:05:25.671 06:23:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.671 06:23:18 -- scripts/common.sh@393 -- # pt= 00:05:25.671 06:23:18 -- scripts/common.sh@394 -- # return 1 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:25.930 06:23:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:25.930 06:23:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:25.930 06:23:18 -- setup/common.sh@80 -- # echo 5368709120 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:25.930 06:23:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.930 06:23:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:25.930 06:23:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.930 06:23:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:25.930 06:23:18 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:25.930 06:23:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:25.930 No valid GPT data, bailing 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # pt= 00:05:25.930 06:23:18 -- scripts/common.sh@394 -- # return 1 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:25.930 06:23:18 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:25.930 06:23:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:25.930 06:23:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.930 06:23:18 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.930 06:23:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.930 06:23:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.930 06:23:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:25.930 06:23:18 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:25.930 06:23:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:25.930 No valid GPT data, bailing 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # pt= 00:05:25.930 06:23:18 -- scripts/common.sh@394 -- # return 1 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:25.930 06:23:18 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:25.930 06:23:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:25.930 06:23:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.930 06:23:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.930 06:23:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:25.930 06:23:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.930 06:23:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:25.930 06:23:18 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:25.930 06:23:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:25.930 No valid GPT data, bailing 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:25.930 06:23:18 -- scripts/common.sh@393 -- # pt= 00:05:25.930 06:23:18 -- scripts/common.sh@394 -- # return 1 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:25.930 06:23:18 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:25.930 06:23:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:25.930 06:23:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.930 06:23:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.930 06:23:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.930 06:23:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.930 06:23:18 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:25.930 06:23:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:25.930 06:23:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:25.930 06:23:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.930 06:23:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.930 06:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.930 
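Each namespace scanned above passes two gates before joining the blocks array: scripts/spdk-gpt.py and blkid find no partition table on it (hence "No valid GPT data, bailing"), and its capacity clears min_disk_size, 3221225472 bytes. A rough standalone equivalent, using plain blkid and sysfs as stand-ins for the SPDK helpers:

    #!/usr/bin/env bash
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    # A disk counts as "in use" when blkid reports a partition-table type.
    block_in_use() {
        [[ -n $(blkid -s PTTYPE -o value "/dev/$1") ]]
    }

    # /sys/block/<dev>/size counts 512-byte sectors.
    sec_size_to_bytes() {
        echo $(( $(<"/sys/block/$1/size") * 512 ))
    }

    for path in /sys/block/nvme*; do
        [[ -e $path ]] || continue
        block=${path##*/}
        block_in_use "$block" && continue
        (( $(sec_size_to_bytes "$block") >= min_disk_size )) || continue
        echo "candidate: $block ($(sec_size_to_bytes "$block") bytes)"
    done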
************************************ 00:05:25.930 START TEST nvme_mount 00:05:25.930 ************************************ 00:05:25.930 06:23:18 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:25.930 06:23:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:25.930 06:23:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:25.930 06:23:18 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.930 06:23:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.930 06:23:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:25.930 06:23:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.930 06:23:18 -- setup/common.sh@40 -- # local part_no=1 00:05:25.930 06:23:18 -- setup/common.sh@41 -- # local size=1073741824 00:05:25.930 06:23:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.930 06:23:18 -- setup/common.sh@44 -- # parts=() 00:05:25.930 06:23:18 -- setup/common.sh@44 -- # local parts 00:05:25.930 06:23:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.930 06:23:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.930 06:23:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.930 06:23:18 -- setup/common.sh@46 -- # (( part++ )) 00:05:25.930 06:23:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.930 06:23:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.930 06:23:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.930 06:23:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:27.306 Creating new GPT entries in memory. 00:05:27.306 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.306 other utilities. 00:05:27.306 06:23:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.306 06:23:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.306 06:23:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.306 06:23:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.306 06:23:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.239 Creating new GPT entries in memory. 00:05:28.239 The operation has completed successfully. 
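The partition_drive sequence just traced zaps the GPT and then lays out fixed-size partitions back to back, serializing each sgdisk --new under flock; the part_start/part_end arithmetic produced the 2048:264191 range for partition 1 here. A sketch of the same loop, where partprobe stands in for the harness's sync_dev_uevents.sh udev wait:

    #!/usr/bin/env bash
    disk=nvme0n1                    # assumption: the disk under test
    part_no=1
    size=$((1073741824 / 4096))     # per-partition length in sectors, as traced

    sgdisk "/dev/$disk" --zap-all

    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock keeps concurrent sgdisk calls from racing on the same disk
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    partprobe "/dev/$disk"          # simplification of the uevent wait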
00:05:28.239 06:23:20 -- setup/common.sh@57 -- # (( part++ )) 00:05:28.239 06:23:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.239 06:23:20 -- setup/common.sh@62 -- # wait 65637 00:05:28.239 06:23:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.239 06:23:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:28.239 06:23:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.239 06:23:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:28.239 06:23:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:28.239 06:23:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.239 06:23:20 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.239 06:23:20 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:28.239 06:23:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:28.239 06:23:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.239 06:23:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.239 06:23:20 -- setup/devices.sh@53 -- # local found=0 00:05:28.239 06:23:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.239 06:23:20 -- setup/devices.sh@56 -- # : 00:05:28.239 06:23:20 -- setup/devices.sh@59 -- # local pci status 00:05:28.239 06:23:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.239 06:23:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:28.239 06:23:20 -- setup/devices.sh@47 -- # setup output config 00:05:28.239 06:23:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.239 06:23:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.239 06:23:20 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.239 06:23:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:28.239 06:23:20 -- setup/devices.sh@63 -- # found=1 00:05:28.239 06:23:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.239 06:23:20 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.239 06:23:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 06:23:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.805 06:23:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 06:23:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.805 06:23:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 06:23:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.805 06:23:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:28.805 06:23:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.805 06:23:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.805 06:23:21 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.805 06:23:21 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:28.805 06:23:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.805 06:23:21 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.805 06:23:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.805 06:23:21 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:28.805 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.805 06:23:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.805 06:23:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:29.064 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.064 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:29.064 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:29.064 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:29.064 06:23:21 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:29.064 06:23:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:29.064 06:23:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.064 06:23:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:29.064 06:23:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:29.064 06:23:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.064 06:23:21 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.064 06:23:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.064 06:23:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:29.064 06:23:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.064 06:23:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.064 06:23:21 -- setup/devices.sh@53 -- # local found=0 00:05:29.064 06:23:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.064 06:23:21 -- setup/devices.sh@56 -- # : 00:05:29.064 06:23:21 -- setup/devices.sh@59 -- # local pci status 00:05:29.064 06:23:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.064 06:23:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.064 06:23:21 -- setup/devices.sh@47 -- # setup output config 00:05:29.064 06:23:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.064 06:23:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.322 06:23:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.322 06:23:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:29.322 06:23:21 -- setup/devices.sh@63 -- # found=1 00:05:29.322 06:23:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.322 06:23:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.322 
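The mkfs helper traced twice in this test (setup/common.sh@66-72, first for /dev/nvme0n1p1 and then for the whole disk with a 1024M cap) boils down to mkdir, mkfs.ext4 -qF, mount. A small sketch with an assumed wrapper name, mkfs_and_mount:

    #!/usr/bin/env bash
    # Format a device (or partition) with ext4 and mount it for the test.
    mkfs_and_mount() {
        local dev=$1 mount=$2 size=$3
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" ${size:+"$size"}   # a size like 1024M caps the fs
        mount "$dev" "$mount"
    }

    # mkfs_and_mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount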
06:23:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.580 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.580 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.838 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.838 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.838 06:23:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.838 06:23:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:29.838 06:23:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.838 06:23:22 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.838 06:23:22 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.838 06:23:22 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.838 06:23:22 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:29.838 06:23:22 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.838 06:23:22 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:29.838 06:23:22 -- setup/devices.sh@50 -- # local mount_point= 00:05:29.838 06:23:22 -- setup/devices.sh@51 -- # local test_file= 00:05:29.838 06:23:22 -- setup/devices.sh@53 -- # local found=0 00:05:29.838 06:23:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.838 06:23:22 -- setup/devices.sh@59 -- # local pci status 00:05:29.838 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.838 06:23:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.838 06:23:22 -- setup/devices.sh@47 -- # setup output config 00:05:29.838 06:23:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.838 06:23:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.096 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.096 06:23:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:30.096 06:23:22 -- setup/devices.sh@63 -- # found=1 00:05:30.096 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.096 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.096 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.354 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.354 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.354 06:23:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.354 06:23:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.614 06:23:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.614 06:23:23 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.614 06:23:23 -- setup/devices.sh@68 -- # return 0 00:05:30.614 06:23:23 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:30.614 06:23:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.614 06:23:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.614 06:23:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.614 06:23:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.614 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:30.614 00:05:30.614 real 0m4.496s 00:05:30.614 user 0m1.039s 00:05:30.614 sys 0m1.143s 00:05:30.614 06:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.614 06:23:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.614 ************************************ 00:05:30.614 END TEST nvme_mount 00:05:30.614 ************************************ 00:05:30.614 06:23:23 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:30.614 06:23:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.614 06:23:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.614 06:23:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.614 ************************************ 00:05:30.614 START TEST dm_mount 00:05:30.614 ************************************ 00:05:30.614 06:23:23 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:30.614 06:23:23 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:30.614 06:23:23 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:30.614 06:23:23 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:30.614 06:23:23 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:30.614 06:23:23 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.614 06:23:23 -- setup/common.sh@40 -- # local part_no=2 00:05:30.614 06:23:23 -- setup/common.sh@41 -- # local size=1073741824 00:05:30.614 06:23:23 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.614 06:23:23 -- setup/common.sh@44 -- # parts=() 00:05:30.614 06:23:23 -- setup/common.sh@44 -- # local parts 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.614 06:23:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.614 06:23:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.614 06:23:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.614 06:23:23 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:30.614 06:23:23 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.614 06:23:23 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:31.551 Creating new GPT entries in memory. 00:05:31.551 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.551 other utilities. 00:05:31.551 06:23:24 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.551 06:23:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.551 06:23:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.551 06:23:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.551 06:23:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:32.928 Creating new GPT entries in memory. 00:05:32.928 The operation has completed successfully. 00:05:32.928 06:23:25 -- setup/common.sh@57 -- # (( part++ )) 00:05:32.928 06:23:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.928 06:23:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:32.928 06:23:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.928 06:23:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:33.862 The operation has completed successfully. 00:05:33.862 06:23:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:33.862 06:23:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.862 06:23:26 -- setup/common.sh@62 -- # wait 66095 00:05:33.862 06:23:26 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:33.862 06:23:26 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.862 06:23:26 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.862 06:23:26 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:33.862 06:23:26 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:33.862 06:23:26 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.862 06:23:26 -- setup/devices.sh@161 -- # break 00:05:33.862 06:23:26 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.862 06:23:26 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:33.862 06:23:26 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:33.862 06:23:26 -- setup/devices.sh@166 -- # dm=dm-0 00:05:33.862 06:23:26 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:33.862 06:23:26 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:33.862 06:23:26 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.862 06:23:26 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:33.862 06:23:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.862 06:23:26 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.862 06:23:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:33.862 06:23:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.862 06:23:26 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.862 06:23:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.862 06:23:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:33.862 06:23:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.862 06:23:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.862 06:23:26 -- setup/devices.sh@53 -- # local found=0 00:05:33.862 06:23:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.862 06:23:26 -- setup/devices.sh@56 -- # : 00:05:33.862 06:23:26 -- setup/devices.sh@59 -- # local pci status 00:05:33.862 06:23:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.862 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.862 06:23:26 -- setup/devices.sh@47 -- # setup output config 00:05:33.862 06:23:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.862 06:23:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.862 06:23:26 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.862 06:23:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.862 06:23:26 -- setup/devices.sh@63 -- # found=1 00:05:33.862 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.862 06:23:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.862 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.429 06:23:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.429 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.429 06:23:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.429 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.429 06:23:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.429 06:23:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:34.429 06:23:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.429 06:23:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:34.429 06:23:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:34.429 06:23:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.429 06:23:26 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:34.429 06:23:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:34.429 06:23:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:34.429 06:23:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:34.429 06:23:26 -- setup/devices.sh@51 -- # local test_file= 00:05:34.429 06:23:26 -- setup/devices.sh@53 -- # local found=0 00:05:34.429 06:23:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:34.429 06:23:26 -- setup/devices.sh@59 -- # local pci status 00:05:34.429 06:23:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.429 06:23:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:34.429 06:23:26 -- setup/devices.sh@47 -- # setup output config 00:05:34.429 06:23:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.429 06:23:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.688 06:23:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.688 06:23:27 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.688 06:23:27 -- setup/devices.sh@63 -- # found=1 00:05:34.689 06:23:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.689 06:23:27 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.689 06:23:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.948 06:23:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.948 06:23:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.948 06:23:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.948 06:23:27 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.948 06:23:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.948 06:23:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.948 06:23:27 -- setup/devices.sh@68 -- # return 0 00:05:34.948 06:23:27 -- setup/devices.sh@187 -- # cleanup_dm 00:05:34.948 06:23:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.206 06:23:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.206 06:23:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:35.206 06:23:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.206 06:23:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:35.206 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.206 06:23:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.206 06:23:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:35.206 00:05:35.206 real 0m4.527s 00:05:35.206 user 0m0.680s 00:05:35.206 sys 0m0.786s 00:05:35.206 06:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.206 06:23:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.206 ************************************ 00:05:35.207 END TEST dm_mount 00:05:35.207 ************************************ 00:05:35.207 06:23:27 -- setup/devices.sh@1 -- # cleanup 00:05:35.207 06:23:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:35.207 06:23:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.207 06:23:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.207 06:23:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:35.207 06:23:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.207 06:23:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.466 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:35.466 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:35.466 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:35.466 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:35.466 06:23:27 -- setup/devices.sh@12 -- # cleanup_dm 00:05:35.466 06:23:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.466 06:23:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:35.466 06:23:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.466 06:23:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:35.466 06:23:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.466 06:23:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:35.466 00:05:35.466 real 0m10.532s 00:05:35.466 user 0m2.340s 00:05:35.466 sys 0m2.529s 00:05:35.466 06:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.466 ************************************ 00:05:35.466 END TEST devices 00:05:35.466 06:23:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.466 ************************************ 00:05:35.466 00:05:35.466 real 0m21.881s 00:05:35.466 user 0m7.368s 00:05:35.466 sys 0m8.912s 00:05:35.466 06:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.466 06:23:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.466 ************************************ 00:05:35.466 END TEST setup.sh 00:05:35.466 ************************************ 00:05:35.466 06:23:28 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:35.725 Hugepages 00:05:35.725 node hugesize free / total 00:05:35.725 node0 1048576kB 0 / 0 00:05:35.725 node0 2048kB 2048 / 2048 00:05:35.725 00:05:35.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.725 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:35.725 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:35.983 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:35.983 06:23:28 -- spdk/autotest.sh@141 -- # uname -s 00:05:35.983 06:23:28 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:35.983 06:23:28 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:35.983 06:23:28 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.550 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.809 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.809 06:23:29 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:37.744 06:23:30 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:37.744 06:23:30 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:37.744 06:23:30 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.744 06:23:30 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:37.744 06:23:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:37.744 06:23:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:37.744 06:23:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.744 06:23:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.744 06:23:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:37.744 06:23:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:37.744 06:23:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:37.744 06:23:30 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.312 Waiting for block devices as requested 00:05:38.312 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.312 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:38.312 06:23:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:38.312 06:23:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.312 06:23:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:38.312 06:23:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:38.312 06:23:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:38.312 06:23:30 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:38.312 06:23:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:38.312 06:23:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:38.312 06:23:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:38.312 06:23:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:38.312 06:23:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:38.312 06:23:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:38.312 06:23:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:38.312 06:23:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:38.312 06:23:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:38.312 06:23:30 -- common/autotest_common.sh@1542 -- # continue 00:05:38.312 06:23:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:38.312 06:23:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:38.571 06:23:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.571 06:23:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:38.571 06:23:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:38.571 06:23:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:38.571 06:23:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:38.571 06:23:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:38.571 06:23:31 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:38.571 06:23:31 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:38.571 06:23:31 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:38.571 06:23:31 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:38.571 06:23:31 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:38.571 06:23:31 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:38.571 06:23:31 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:38.571 06:23:31 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:38.571 06:23:31 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:38.571 06:23:31 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:38.571 06:23:31 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:38.571 06:23:31 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:38.571 06:23:31 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:38.571 06:23:31 -- common/autotest_common.sh@1542 -- # continue 00:05:38.571 06:23:31 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:38.571 06:23:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.571 06:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.571 06:23:31 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:38.571 06:23:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.571 06:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.571 06:23:31 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.397 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:39.397 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:39.397 06:23:31 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:39.397 06:23:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:39.397 06:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.397 06:23:31 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:39.397 06:23:31 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:39.397 06:23:31 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:39.397 06:23:31 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:39.397 06:23:31 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:39.397 06:23:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:39.397 06:23:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:39.397 06:23:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:39.397 06:23:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.397 06:23:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:39.397 06:23:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:39.397 06:23:32 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:39.397 06:23:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:39.397 06:23:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:39.397 06:23:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:39.397 06:23:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:39.397 06:23:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:39.397 06:23:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:39.397 06:23:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:39.397 06:23:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:39.397 06:23:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:39.397 06:23:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:39.397 06:23:32 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:39.397 06:23:32 -- common/autotest_common.sh@1578 -- # return 0 00:05:39.397 06:23:32 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:39.397 06:23:32 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:39.397 06:23:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:39.397 06:23:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:39.397 06:23:32 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:39.397 06:23:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.397 06:23:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.397 06:23:32 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:39.397 06:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.397 06:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.397 06:23:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 ************************************ 00:05:39.656 START TEST env 00:05:39.656 ************************************ 00:05:39.656 06:23:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:39.656 * Looking for test storage... 
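The dm_mount teardown traced at the start of this section (cleanup_dm, then cleanup_nvme) reduces to a short shell sequence; a minimal sketch using the mount point and device names from this run:

mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mountpoint -q "$mnt" && umount "$mnt"
# drop the device-mapper target before touching its backing partitions
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [[ -b $part ]] && wipefs --all "$part"           # clears the ext4 magic (53 ef)
done
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1   # clears the GPT headers and PMBR (55 aa)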
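The nvme_namespace_revert eligibility checks above condense similarly; a sketch assuming nvme-cli and the sysfs layout shown in the trace (OACS bit 3 is Namespace Management support, unvmcap is unallocated NVM capacity):

rootdir=/home/vagrant/spdk_repo/spdk
for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
    # map the PCI address to its controller node, e.g. 0000:00:06.0 -> /dev/nvme0
    ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # 0x12a in this run
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # 0 in this run
    if (( (oacs & 0x8) == 0 || unvmcap != 0 )); then
        echo "$ctrlr would need a namespace revert"
    fi
done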
00:05:39.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:39.656 06:23:32 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:39.656 06:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.656 06:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.656 06:23:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 ************************************ 00:05:39.656 START TEST env_memory 00:05:39.656 ************************************ 00:05:39.656 06:23:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:39.656 00:05:39.656 00:05:39.656 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.656 http://cunit.sourceforge.net/ 00:05:39.656 00:05:39.656 00:05:39.656 Suite: memory 00:05:39.656 Test: alloc and free memory map ...[2024-10-04 06:23:32.236387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.656 passed 00:05:39.656 Test: mem map translation ...[2024-10-04 06:23:32.267725] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.656 [2024-10-04 06:23:32.267948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.656 [2024-10-04 06:23:32.268146] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.656 [2024-10-04 06:23:32.268348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.656 passed 00:05:39.656 Test: mem map registration ...[2024-10-04 06:23:32.332455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.656 [2024-10-04 06:23:32.332655] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.915 passed 00:05:39.915 Test: mem map adjacent registrations ...passed 00:05:39.915 00:05:39.915 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.915 suites 1 1 n/a 0 0 00:05:39.915 tests 4 4 4 0 0 00:05:39.915 asserts 152 152 152 0 n/a 00:05:39.915 00:05:39.915 Elapsed time = 0.213 seconds 00:05:39.915 00:05:39.915 real 0m0.236s 00:05:39.915 user 0m0.217s 00:05:39.915 sys 0m0.013s 00:05:39.915 06:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.915 06:23:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.915 ************************************ 00:05:39.915 END TEST env_memory 00:05:39.915 ************************************ 00:05:39.915 06:23:32 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.915 06:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.915 06:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.915 06:23:32 -- common/autotest_common.sh@10 -- # set +x 00:05:39.915 ************************************ 00:05:39.915 START TEST env_vtophys 00:05:39.915 ************************************ 00:05:39.915 06:23:32 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.915 EAL: lib.eal log level changed from notice to debug 00:05:39.915 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 1 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 2 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 3 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 4 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 5 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 6 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 7 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 8 as core 0 on socket 0 00:05:39.915 EAL: Detected lcore 9 as core 0 on socket 0 00:05:39.915 EAL: Maximum logical cores by configuration: 128 00:05:39.915 EAL: Detected CPU lcores: 10 00:05:39.916 EAL: Detected NUMA nodes: 1 00:05:39.916 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:39.916 EAL: Detected shared linkage of DPDK 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:39.916 EAL: Registered [vdev] bus. 00:05:39.916 EAL: bus.vdev log level changed from disabled to notice 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:39.916 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:39.916 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:39.916 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:39.916 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.916 EAL: No shared files mode enabled, IPC is disabled 00:05:39.916 EAL: Selected IOVA mode 'PA' 00:05:39.916 EAL: Probing VFIO support... 00:05:39.916 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.916 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:39.916 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.916 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.916 EAL: Setting up physically contiguous memory... 
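The VFIO probe above is essentially a pair of sysfs existence checks; roughly, in shell terms:

if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "VFIO loaded; EAL can drive devices through vfio-pci"
else
    # matches the 'Module /sys/module/vfio not found! error 2' lines above:
    # EAL skips VFIO and, with no IOMMU path, selects IOVA mode 'PA'
    echo "VFIO modules not loaded, skipping VFIO support"
fi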
00:05:39.916 EAL: Setting maximum number of open files to 524288 00:05:39.916 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.916 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.916 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.916 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.916 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.916 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.916 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.916 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.916 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.916 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.916 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.916 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.916 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.916 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.916 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.916 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.916 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.916 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.916 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.916 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.916 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.916 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.916 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.916 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.916 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.916 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.916 EAL: Hugepages will be freed exactly as allocated. 00:05:39.916 EAL: No shared files mode enabled, IPC is disabled 00:05:39.916 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: TSC frequency is ~2200000 KHz 00:05:40.175 EAL: Main lcore 0 is ready (tid=7f647c010a00;cpuset=[0]) 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 0 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 2MB 00:05:40.175 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:40.175 EAL: Mem event callback 'spdk:(nil)' registered 00:05:40.175 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:40.175 00:05:40.175 00:05:40.175 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.175 http://cunit.sourceforge.net/ 00:05:40.175 00:05:40.175 00:05:40.175 Suite: components_suite 00:05:40.175 Test: vtophys_malloc_test ...passed 00:05:40.175 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
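Two quick calculations tie the numbers above and below together; both can be checked in plain shell:

# each of the 4 memseg lists reserves VA for 8192 hugepages of 2 MiB:
printf '0x%x\n' $(( 8192 * 2097152 ))        # 0x400000000 -> the 16 GiB asks above
# the vtophys suite below then grows the heap in (2^k + 2) MiB steps:
for k in $(seq 1 10); do echo $(( (1 << k) + 2 ))MB; done   # 4MB 6MB 10MB ... 1026MB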
00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 4MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 4MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 6MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 6MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 10MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 10MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 18MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 18MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 34MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 34MB 00:05:40.175 EAL: Trying to obtain current memory policy. 
00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.175 EAL: Trying to obtain current memory policy. 00:05:40.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.175 EAL: Restoring previous memory policy: 4 00:05:40.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.175 EAL: request: mp_malloc_sync 00:05:40.175 EAL: No shared files mode enabled, IPC is disabled 00:05:40.175 EAL: Heap on socket 0 was expanded by 258MB 00:05:40.434 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.434 EAL: request: mp_malloc_sync 00:05:40.434 EAL: No shared files mode enabled, IPC is disabled 00:05:40.434 EAL: Heap on socket 0 was shrunk by 258MB 00:05:40.434 EAL: Trying to obtain current memory policy. 00:05:40.434 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.434 EAL: Restoring previous memory policy: 4 00:05:40.434 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.434 EAL: request: mp_malloc_sync 00:05:40.434 EAL: No shared files mode enabled, IPC is disabled 00:05:40.434 EAL: Heap on socket 0 was expanded by 514MB 00:05:40.693 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.693 EAL: request: mp_malloc_sync 00:05:40.693 EAL: No shared files mode enabled, IPC is disabled 00:05:40.693 EAL: Heap on socket 0 was shrunk by 514MB 00:05:40.693 EAL: Trying to obtain current memory policy. 
00:05:40.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.952 EAL: Restoring previous memory policy: 4 00:05:40.952 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.952 EAL: request: mp_malloc_sync 00:05:40.952 EAL: No shared files mode enabled, IPC is disabled 00:05:40.952 EAL: Heap on socket 0 was expanded by 1026MB 00:05:41.211 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.484 passed 00:05:41.484 00:05:41.484 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.484 suites 1 1 n/a 0 0 00:05:41.484 tests 2 2 2 0 0 00:05:41.484 asserts 5162 5162 5162 0 n/a 00:05:41.484 00:05:41.484 Elapsed time = 1.320 seconds 00:05:41.484 EAL: request: mp_malloc_sync 00:05:41.484 EAL: No shared files mode enabled, IPC is disabled 00:05:41.484 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:41.484 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.484 EAL: request: mp_malloc_sync 00:05:41.484 EAL: No shared files mode enabled, IPC is disabled 00:05:41.484 EAL: Heap on socket 0 was shrunk by 2MB 00:05:41.484 EAL: No shared files mode enabled, IPC is disabled 00:05:41.484 EAL: No shared files mode enabled, IPC is disabled 00:05:41.484 EAL: No shared files mode enabled, IPC is disabled 00:05:41.484 00:05:41.484 real 0m1.516s 00:05:41.484 user 0m0.819s 00:05:41.484 sys 0m0.563s 00:05:41.484 06:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.484 06:23:33 -- common/autotest_common.sh@10 -- # set +x 00:05:41.484 ************************************ 00:05:41.484 END TEST env_vtophys 00:05:41.484 ************************************ 00:05:41.484 06:23:34 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:41.484 06:23:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.484 06:23:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.484 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.484 ************************************ 00:05:41.484 START TEST env_pci 00:05:41.484 ************************************ 00:05:41.484 06:23:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:41.484 00:05:41.484 00:05:41.484 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.484 http://cunit.sourceforge.net/ 00:05:41.484 00:05:41.484 00:05:41.484 Suite: pci 00:05:41.484 Test: pci_hook ...[2024-10-04 06:23:34.055561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67227 has claimed it 00:05:41.484 passed 00:05:41.484 00:05:41.484 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.484 suites 1 1 n/a 0 0 00:05:41.484 tests 1 1 1 0 0 00:05:41.484 asserts 25 25 25 0 n/a 00:05:41.484 00:05:41.484 Elapsed time = 0.002 seconds 00:05:41.484 EAL: Cannot find device (10000:00:01.0) 00:05:41.484 EAL: Failed to attach device on primary process 00:05:41.484 00:05:41.484 real 0m0.018s 00:05:41.484 user 0m0.007s 00:05:41.484 sys 0m0.010s 00:05:41.484 06:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.484 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.484 ************************************ 00:05:41.484 END TEST env_pci 00:05:41.484 ************************************ 00:05:41.484 06:23:34 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:41.484 06:23:34 -- env/env.sh@15 -- # uname 00:05:41.484 06:23:34 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:41.484 06:23:34 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:41.484 06:23:34 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.484 06:23:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:41.484 06:23:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.484 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.484 ************************************ 00:05:41.484 START TEST env_dpdk_post_init 00:05:41.484 ************************************ 00:05:41.484 06:23:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.484 EAL: Detected CPU lcores: 10 00:05:41.484 EAL: Detected NUMA nodes: 1 00:05:41.484 EAL: Detected shared linkage of DPDK 00:05:41.484 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.772 EAL: Selected IOVA mode 'PA' 00:05:41.772 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.772 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:41.772 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:41.772 Starting DPDK initialization... 00:05:41.772 Starting SPDK post initialization... 00:05:41.772 SPDK NVMe probe 00:05:41.772 Attaching to 0000:00:06.0 00:05:41.772 Attaching to 0000:00:07.0 00:05:41.772 Attached to 0000:00:06.0 00:05:41.772 Attached to 0000:00:07.0 00:05:41.772 Cleaning up... 00:05:41.772 00:05:41.772 real 0m0.177s 00:05:41.772 user 0m0.046s 00:05:41.772 sys 0m0.032s 00:05:41.772 06:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.772 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.772 ************************************ 00:05:41.772 END TEST env_dpdk_post_init 00:05:41.772 ************************************ 00:05:41.772 06:23:34 -- env/env.sh@26 -- # uname 00:05:41.772 06:23:34 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.772 06:23:34 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.772 06:23:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.772 06:23:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.772 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.772 ************************************ 00:05:41.772 START TEST env_mem_callbacks 00:05:41.772 ************************************ 00:05:41.772 06:23:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.772 EAL: Detected CPU lcores: 10 00:05:41.772 EAL: Detected NUMA nodes: 1 00:05:41.772 EAL: Detected shared linkage of DPDK 00:05:41.772 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.772 EAL: Selected IOVA mode 'PA' 00:05:42.045 00:05:42.045 00:05:42.045 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.045 http://cunit.sourceforge.net/ 00:05:42.045 00:05:42.045 00:05:42.045 Suite: memory 00:05:42.045 Test: test ... 
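Stepping back to the flags env_dpdk_post_init was launched with above: env.sh assembles them as, roughly (binary path shortened here):

argv='-c 0x1 '                               # one core is enough for init testing
[[ $(uname) == Linux ]] && argv+=--base-virtaddr=0x200000000000
# the fixed base VA keeps DPDK's mappings at a predictable address across runs
./env_dpdk_post_init $argv                   # unquoted on purpose: argv must word-split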
00:05:42.045 register 0x200000200000 2097152 00:05:42.045 malloc 3145728 00:05:42.045 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.045 register 0x200000400000 4194304 00:05:42.046 buf 0x200000500000 len 3145728 PASSED 00:05:42.046 malloc 64 00:05:42.046 buf 0x2000004fff40 len 64 PASSED 00:05:42.046 malloc 4194304 00:05:42.046 register 0x200000800000 6291456 00:05:42.046 buf 0x200000a00000 len 4194304 PASSED 00:05:42.046 free 0x200000500000 3145728 00:05:42.046 free 0x2000004fff40 64 00:05:42.046 unregister 0x200000400000 4194304 PASSED 00:05:42.046 free 0x200000a00000 4194304 00:05:42.046 unregister 0x200000800000 6291456 PASSED 00:05:42.046 malloc 8388608 00:05:42.046 register 0x200000400000 10485760 00:05:42.046 buf 0x200000600000 len 8388608 PASSED 00:05:42.046 free 0x200000600000 8388608 00:05:42.046 unregister 0x200000400000 10485760 PASSED 00:05:42.046 passed 00:05:42.046 00:05:42.046 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.046 suites 1 1 n/a 0 0 00:05:42.046 tests 1 1 1 0 0 00:05:42.046 asserts 15 15 15 0 n/a 00:05:42.046 00:05:42.046 Elapsed time = 0.009 seconds 00:05:42.046 00:05:42.046 real 0m0.138s 00:05:42.046 user 0m0.013s 00:05:42.046 sys 0m0.025s 00:05:42.046 06:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.046 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 END TEST env_mem_callbacks 00:05:42.046 ************************************ 00:05:42.046 00:05:42.046 real 0m2.453s 00:05:42.046 user 0m1.218s 00:05:42.046 sys 0m0.870s 00:05:42.046 06:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.046 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 END TEST env 00:05:42.046 ************************************ 00:05:42.046 06:23:34 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:42.046 06:23:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.046 06:23:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.046 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.046 ************************************ 00:05:42.046 START TEST rpc 00:05:42.046 ************************************ 00:05:42.046 06:23:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:42.046 * Looking for test storage... 00:05:42.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.046 06:23:34 -- rpc/rpc.sh@65 -- # spdk_pid=67335 00:05:42.046 06:23:34 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:42.049 06:23:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.049 06:23:34 -- rpc/rpc.sh@67 -- # waitforlisten 67335 00:05:42.049 06:23:34 -- common/autotest_common.sh@819 -- # '[' -z 67335 ']' 00:05:42.049 06:23:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.049 06:23:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.049 06:23:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
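The startup handshake rpc.sh is tracing here follows the usual autotest pattern; a minimal stand-in for waitforlisten that polls the default RPC socket:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" -e bdev &         # -e bdev enables the bdev tracepoint group
spdk_pid=$!
trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
until "$spdk/scripts/rpc.py" rpc_get_methods &>/dev/null; do
    sleep 0.1                                # /var/tmp/spdk.sock not accepting yet
done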
00:05:42.049 06:23:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.049 06:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.308 [2024-10-04 06:23:34.750105] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:05:42.308 [2024-10-04 06:23:34.750215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67335 ] 00:05:42.308 [2024-10-04 06:23:34.888570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.308 [2024-10-04 06:23:34.956622] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.308 [2024-10-04 06:23:34.956764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:42.308 [2024-10-04 06:23:34.956777] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67335' to capture a snapshot of events at runtime. 00:05:42.308 [2024-10-04 06:23:34.956787] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67335 for offline analysis/debug. 00:05:42.308 [2024-10-04 06:23:34.956844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.246 06:23:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.246 06:23:35 -- common/autotest_common.sh@852 -- # return 0 00:05:43.246 06:23:35 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.246 06:23:35 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.246 06:23:35 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.246 06:23:35 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.246 06:23:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.246 06:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.246 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.246 ************************************ 00:05:43.246 START TEST rpc_integrity 00:05:43.246 ************************************ 00:05:43.246 06:23:35 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:43.246 06:23:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.246 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.246 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.246 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.246 06:23:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.246 06:23:35 -- rpc/rpc.sh@13 -- # jq length 00:05:43.246 06:23:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.246 06:23:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.246 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.246 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.246 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.246 06:23:35 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.246 06:23:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.246 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.246 06:23:35 -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.246 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.246 06:23:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.246 { 00:05:43.246 "aliases": [ 00:05:43.246 "26500a1b-de99-4c6c-a140-c626111d6792" 00:05:43.246 ], 00:05:43.246 "assigned_rate_limits": { 00:05:43.246 "r_mbytes_per_sec": 0, 00:05:43.246 "rw_ios_per_sec": 0, 00:05:43.246 "rw_mbytes_per_sec": 0, 00:05:43.246 "w_mbytes_per_sec": 0 00:05:43.246 }, 00:05:43.246 "block_size": 512, 00:05:43.246 "claimed": false, 00:05:43.246 "driver_specific": {}, 00:05:43.246 "memory_domains": [ 00:05:43.246 { 00:05:43.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.246 "dma_device_type": 2 00:05:43.246 } 00:05:43.246 ], 00:05:43.246 "name": "Malloc0", 00:05:43.246 "num_blocks": 16384, 00:05:43.246 "product_name": "Malloc disk", 00:05:43.246 "supported_io_types": { 00:05:43.246 "abort": true, 00:05:43.246 "compare": false, 00:05:43.246 "compare_and_write": false, 00:05:43.246 "flush": true, 00:05:43.246 "nvme_admin": false, 00:05:43.246 "nvme_io": false, 00:05:43.246 "read": true, 00:05:43.246 "reset": true, 00:05:43.246 "unmap": true, 00:05:43.246 "write": true, 00:05:43.246 "write_zeroes": true 00:05:43.246 }, 00:05:43.246 "uuid": "26500a1b-de99-4c6c-a140-c626111d6792", 00:05:43.246 "zoned": false 00:05:43.246 } 00:05:43.246 ]' 00:05:43.246 06:23:35 -- rpc/rpc.sh@17 -- # jq length 00:05:43.506 06:23:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.506 06:23:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:43.506 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 [2024-10-04 06:23:35.936496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:43.506 [2024-10-04 06:23:35.936535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.506 [2024-10-04 06:23:35.936552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a3490 00:05:43.506 [2024-10-04 06:23:35.936561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.506 [2024-10-04 06:23:35.937868] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.506 [2024-10-04 06:23:35.937899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.506 Passthru0 00:05:43.506 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.506 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.506 { 00:05:43.506 "aliases": [ 00:05:43.506 "26500a1b-de99-4c6c-a140-c626111d6792" 00:05:43.506 ], 00:05:43.506 "assigned_rate_limits": { 00:05:43.506 "r_mbytes_per_sec": 0, 00:05:43.506 "rw_ios_per_sec": 0, 00:05:43.506 "rw_mbytes_per_sec": 0, 00:05:43.506 "w_mbytes_per_sec": 0 00:05:43.506 }, 00:05:43.506 "block_size": 512, 00:05:43.506 "claim_type": "exclusive_write", 00:05:43.506 "claimed": true, 00:05:43.506 "driver_specific": {}, 00:05:43.506 "memory_domains": [ 00:05:43.506 { 00:05:43.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.506 "dma_device_type": 2 00:05:43.506 } 00:05:43.506 ], 00:05:43.506 "name": "Malloc0", 00:05:43.506 "num_blocks": 16384, 
00:05:43.506 "product_name": "Malloc disk", 00:05:43.506 "supported_io_types": { 00:05:43.506 "abort": true, 00:05:43.506 "compare": false, 00:05:43.506 "compare_and_write": false, 00:05:43.506 "flush": true, 00:05:43.506 "nvme_admin": false, 00:05:43.506 "nvme_io": false, 00:05:43.506 "read": true, 00:05:43.506 "reset": true, 00:05:43.506 "unmap": true, 00:05:43.506 "write": true, 00:05:43.506 "write_zeroes": true 00:05:43.506 }, 00:05:43.506 "uuid": "26500a1b-de99-4c6c-a140-c626111d6792", 00:05:43.506 "zoned": false 00:05:43.506 }, 00:05:43.506 { 00:05:43.506 "aliases": [ 00:05:43.506 "4699b55e-ae27-561f-b513-b3cf3fe32979" 00:05:43.506 ], 00:05:43.506 "assigned_rate_limits": { 00:05:43.506 "r_mbytes_per_sec": 0, 00:05:43.506 "rw_ios_per_sec": 0, 00:05:43.506 "rw_mbytes_per_sec": 0, 00:05:43.506 "w_mbytes_per_sec": 0 00:05:43.506 }, 00:05:43.506 "block_size": 512, 00:05:43.506 "claimed": false, 00:05:43.506 "driver_specific": { 00:05:43.506 "passthru": { 00:05:43.506 "base_bdev_name": "Malloc0", 00:05:43.506 "name": "Passthru0" 00:05:43.506 } 00:05:43.506 }, 00:05:43.506 "memory_domains": [ 00:05:43.506 { 00:05:43.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.506 "dma_device_type": 2 00:05:43.506 } 00:05:43.506 ], 00:05:43.506 "name": "Passthru0", 00:05:43.506 "num_blocks": 16384, 00:05:43.506 "product_name": "passthru", 00:05:43.506 "supported_io_types": { 00:05:43.506 "abort": true, 00:05:43.506 "compare": false, 00:05:43.506 "compare_and_write": false, 00:05:43.506 "flush": true, 00:05:43.506 "nvme_admin": false, 00:05:43.506 "nvme_io": false, 00:05:43.506 "read": true, 00:05:43.506 "reset": true, 00:05:43.506 "unmap": true, 00:05:43.506 "write": true, 00:05:43.506 "write_zeroes": true 00:05:43.506 }, 00:05:43.506 "uuid": "4699b55e-ae27-561f-b513-b3cf3fe32979", 00:05:43.506 "zoned": false 00:05:43.506 } 00:05:43.506 ]' 00:05:43.506 06:23:35 -- rpc/rpc.sh@21 -- # jq length 00:05:43.506 06:23:36 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.506 06:23:36 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.506 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:36 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:43.506 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:36 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.506 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:36 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.506 06:23:36 -- rpc/rpc.sh@26 -- # jq length 00:05:43.506 06:23:36 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.506 00:05:43.506 real 0m0.324s 00:05:43.506 user 0m0.210s 00:05:43.506 sys 0m0.035s 00:05:43.506 06:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 ************************************ 00:05:43.506 END TEST rpc_integrity 00:05:43.506 ************************************ 00:05:43.506 06:23:36 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.506 06:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.506 
06:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 ************************************ 00:05:43.506 START TEST rpc_plugins 00:05:43.506 ************************************ 00:05:43.506 06:23:36 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:43.506 06:23:36 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.506 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.506 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.506 06:23:36 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.506 06:23:36 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:43.506 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.506 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.765 06:23:36 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.765 { 00:05:43.765 "aliases": [ 00:05:43.765 "51de4223-2d20-4ba7-a300-b8888b9d87c1" 00:05:43.765 ], 00:05:43.765 "assigned_rate_limits": { 00:05:43.765 "r_mbytes_per_sec": 0, 00:05:43.765 "rw_ios_per_sec": 0, 00:05:43.765 "rw_mbytes_per_sec": 0, 00:05:43.765 "w_mbytes_per_sec": 0 00:05:43.765 }, 00:05:43.765 "block_size": 4096, 00:05:43.765 "claimed": false, 00:05:43.765 "driver_specific": {}, 00:05:43.765 "memory_domains": [ 00:05:43.765 { 00:05:43.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.765 "dma_device_type": 2 00:05:43.765 } 00:05:43.765 ], 00:05:43.765 "name": "Malloc1", 00:05:43.765 "num_blocks": 256, 00:05:43.765 "product_name": "Malloc disk", 00:05:43.765 "supported_io_types": { 00:05:43.765 "abort": true, 00:05:43.765 "compare": false, 00:05:43.765 "compare_and_write": false, 00:05:43.765 "flush": true, 00:05:43.765 "nvme_admin": false, 00:05:43.765 "nvme_io": false, 00:05:43.765 "read": true, 00:05:43.765 "reset": true, 00:05:43.765 "unmap": true, 00:05:43.765 "write": true, 00:05:43.765 "write_zeroes": true 00:05:43.765 }, 00:05:43.765 "uuid": "51de4223-2d20-4ba7-a300-b8888b9d87c1", 00:05:43.765 "zoned": false 00:05:43.765 } 00:05:43.765 ]' 00:05:43.765 06:23:36 -- rpc/rpc.sh@32 -- # jq length 00:05:43.765 06:23:36 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.765 06:23:36 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.765 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.765 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.765 06:23:36 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.765 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.765 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.765 06:23:36 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.765 06:23:36 -- rpc/rpc.sh@36 -- # jq length 00:05:43.765 06:23:36 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.765 00:05:43.765 real 0m0.154s 00:05:43.765 user 0m0.098s 00:05:43.765 sys 0m0.016s 00:05:43.765 06:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.765 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 ************************************ 00:05:43.765 END TEST rpc_plugins 00:05:43.765 ************************************ 00:05:43.765 06:23:36 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
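rpc_trace_cmd_test, whose output follows, boils down to a few assertions over trace_get_info; by hand:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($rpc trace_get_info)
jq 'has("tpoint_group_mask"), has("tpoint_shm_path")' <<<"$info"  # true / true
jq -r '.bdev.tpoint_mask' <<<"$info"   # 0xffffffffffffffff: every bdev tpoint is on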
00:05:43.765 06:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.765 06:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.765 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 ************************************ 00:05:43.765 START TEST rpc_trace_cmd_test 00:05:43.765 ************************************ 00:05:43.765 06:23:36 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:43.765 06:23:36 -- rpc/rpc.sh@40 -- # local info 00:05:43.765 06:23:36 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.765 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.765 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.765 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.765 06:23:36 -- rpc/rpc.sh@42 -- # info='{ 00:05:43.765 "bdev": { 00:05:43.765 "mask": "0x8", 00:05:43.765 "tpoint_mask": "0xffffffffffffffff" 00:05:43.765 }, 00:05:43.765 "bdev_nvme": { 00:05:43.765 "mask": "0x4000", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "blobfs": { 00:05:43.765 "mask": "0x80", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "dsa": { 00:05:43.765 "mask": "0x200", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "ftl": { 00:05:43.765 "mask": "0x40", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "iaa": { 00:05:43.765 "mask": "0x1000", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "iscsi_conn": { 00:05:43.765 "mask": "0x2", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "nvme_pcie": { 00:05:43.765 "mask": "0x800", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "nvme_tcp": { 00:05:43.765 "mask": "0x2000", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "nvmf_rdma": { 00:05:43.765 "mask": "0x10", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "nvmf_tcp": { 00:05:43.765 "mask": "0x20", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "scsi": { 00:05:43.765 "mask": "0x4", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "thread": { 00:05:43.765 "mask": "0x400", 00:05:43.765 "tpoint_mask": "0x0" 00:05:43.765 }, 00:05:43.765 "tpoint_group_mask": "0x8", 00:05:43.765 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67335" 00:05:43.765 }' 00:05:43.765 06:23:36 -- rpc/rpc.sh@43 -- # jq length 00:05:43.765 06:23:36 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:44.024 06:23:36 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:44.024 06:23:36 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:44.024 06:23:36 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:44.024 06:23:36 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:44.024 06:23:36 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:44.024 06:23:36 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:44.024 06:23:36 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:44.024 06:23:36 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:44.024 00:05:44.024 real 0m0.275s 00:05:44.024 user 0m0.238s 00:05:44.024 sys 0m0.025s 00:05:44.024 06:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.024 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.024 ************************************ 00:05:44.024 END TEST rpc_trace_cmd_test 00:05:44.024 ************************************ 00:05:44.024 06:23:36 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:44.024 06:23:36 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:44.024 06:23:36 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:44.024 06:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.024 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.024 ************************************ 00:05:44.024 START TEST go_rpc 00:05:44.024 ************************************ 00:05:44.024 06:23:36 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:44.284 06:23:36 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.284 06:23:36 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:44.284 06:23:36 -- rpc/rpc.sh@52 -- # jq length 00:05:44.284 06:23:36 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:44.284 06:23:36 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.284 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.284 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.284 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.284 06:23:36 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:44.284 06:23:36 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.284 06:23:36 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["547d01bb-3cbc-435c-a737-834bb8057f32"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"547d01bb-3cbc-435c-a737-834bb8057f32","zoned":false}]' 00:05:44.284 06:23:36 -- rpc/rpc.sh@57 -- # jq length 00:05:44.284 06:23:36 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:44.284 06:23:36 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.284 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.284 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.284 06:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.284 06:23:36 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:44.284 06:23:36 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:44.284 06:23:36 -- rpc/rpc.sh@61 -- # jq length 00:05:44.284 06:23:36 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:44.284 00:05:44.284 real 0m0.239s 00:05:44.284 user 0m0.152s 00:05:44.284 sys 0m0.046s 00:05:44.284 06:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.284 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.284 ************************************ 00:05:44.284 END TEST go_rpc 00:05:44.284 ************************************ 00:05:44.543 06:23:36 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:44.543 06:23:36 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:44.543 06:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.543 06:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.543 06:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.543 ************************************ 00:05:44.543 START TEST rpc_daemon_integrity 00:05:44.543 ************************************ 00:05:44.543 06:23:36 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:44.543 06:23:36 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.543 06:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.543 06:23:36 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.543 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.543 06:23:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.543 06:23:37 -- rpc/rpc.sh@13 -- # jq length 00:05:44.543 06:23:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.544 06:23:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.544 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.544 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.544 06:23:37 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:44.544 06:23:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.544 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.544 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.544 06:23:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.544 { 00:05:44.544 "aliases": [ 00:05:44.544 "9581c5cb-9dff-4ea7-a3b0-a7178fb505e1" 00:05:44.544 ], 00:05:44.544 "assigned_rate_limits": { 00:05:44.544 "r_mbytes_per_sec": 0, 00:05:44.544 "rw_ios_per_sec": 0, 00:05:44.544 "rw_mbytes_per_sec": 0, 00:05:44.544 "w_mbytes_per_sec": 0 00:05:44.544 }, 00:05:44.544 "block_size": 512, 00:05:44.544 "claimed": false, 00:05:44.544 "driver_specific": {}, 00:05:44.544 "memory_domains": [ 00:05:44.544 { 00:05:44.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.544 "dma_device_type": 2 00:05:44.544 } 00:05:44.544 ], 00:05:44.544 "name": "Malloc3", 00:05:44.544 "num_blocks": 16384, 00:05:44.544 "product_name": "Malloc disk", 00:05:44.544 "supported_io_types": { 00:05:44.544 "abort": true, 00:05:44.544 "compare": false, 00:05:44.544 "compare_and_write": false, 00:05:44.544 "flush": true, 00:05:44.544 "nvme_admin": false, 00:05:44.544 "nvme_io": false, 00:05:44.544 "read": true, 00:05:44.544 "reset": true, 00:05:44.544 "unmap": true, 00:05:44.544 "write": true, 00:05:44.544 "write_zeroes": true 00:05:44.544 }, 00:05:44.544 "uuid": "9581c5cb-9dff-4ea7-a3b0-a7178fb505e1", 00:05:44.544 "zoned": false 00:05:44.544 } 00:05:44.544 ]' 00:05:44.544 06:23:37 -- rpc/rpc.sh@17 -- # jq length 00:05:44.544 06:23:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.544 06:23:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:44.544 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.544 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 [2024-10-04 06:23:37.157350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:44.544 [2024-10-04 06:23:37.157390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.544 [2024-10-04 06:23:37.157405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f61d0 00:05:44.544 [2024-10-04 06:23:37.157414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.544 [2024-10-04 06:23:37.158557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.544 [2024-10-04 06:23:37.158585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.544 Passthru0 00:05:44.544 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.544 06:23:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.544 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.544 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.544 
06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.544 06:23:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.544 { 00:05:44.544 "aliases": [ 00:05:44.544 "9581c5cb-9dff-4ea7-a3b0-a7178fb505e1" 00:05:44.544 ], 00:05:44.544 "assigned_rate_limits": { 00:05:44.544 "r_mbytes_per_sec": 0, 00:05:44.544 "rw_ios_per_sec": 0, 00:05:44.544 "rw_mbytes_per_sec": 0, 00:05:44.544 "w_mbytes_per_sec": 0 00:05:44.544 }, 00:05:44.544 "block_size": 512, 00:05:44.544 "claim_type": "exclusive_write", 00:05:44.544 "claimed": true, 00:05:44.544 "driver_specific": {}, 00:05:44.544 "memory_domains": [ 00:05:44.544 { 00:05:44.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.544 "dma_device_type": 2 00:05:44.544 } 00:05:44.544 ], 00:05:44.544 "name": "Malloc3", 00:05:44.544 "num_blocks": 16384, 00:05:44.544 "product_name": "Malloc disk", 00:05:44.544 "supported_io_types": { 00:05:44.544 "abort": true, 00:05:44.544 "compare": false, 00:05:44.544 "compare_and_write": false, 00:05:44.544 "flush": true, 00:05:44.544 "nvme_admin": false, 00:05:44.544 "nvme_io": false, 00:05:44.544 "read": true, 00:05:44.544 "reset": true, 00:05:44.544 "unmap": true, 00:05:44.544 "write": true, 00:05:44.544 "write_zeroes": true 00:05:44.544 }, 00:05:44.544 "uuid": "9581c5cb-9dff-4ea7-a3b0-a7178fb505e1", 00:05:44.544 "zoned": false 00:05:44.544 }, 00:05:44.544 { 00:05:44.544 "aliases": [ 00:05:44.544 "12da34c6-77ae-5b54-9b05-1e0c76aa2bbb" 00:05:44.544 ], 00:05:44.544 "assigned_rate_limits": { 00:05:44.544 "r_mbytes_per_sec": 0, 00:05:44.544 "rw_ios_per_sec": 0, 00:05:44.544 "rw_mbytes_per_sec": 0, 00:05:44.544 "w_mbytes_per_sec": 0 00:05:44.544 }, 00:05:44.544 "block_size": 512, 00:05:44.544 "claimed": false, 00:05:44.544 "driver_specific": { 00:05:44.544 "passthru": { 00:05:44.544 "base_bdev_name": "Malloc3", 00:05:44.544 "name": "Passthru0" 00:05:44.544 } 00:05:44.544 }, 00:05:44.544 "memory_domains": [ 00:05:44.544 { 00:05:44.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.544 "dma_device_type": 2 00:05:44.544 } 00:05:44.544 ], 00:05:44.544 "name": "Passthru0", 00:05:44.544 "num_blocks": 16384, 00:05:44.544 "product_name": "passthru", 00:05:44.544 "supported_io_types": { 00:05:44.544 "abort": true, 00:05:44.544 "compare": false, 00:05:44.544 "compare_and_write": false, 00:05:44.544 "flush": true, 00:05:44.544 "nvme_admin": false, 00:05:44.544 "nvme_io": false, 00:05:44.544 "read": true, 00:05:44.544 "reset": true, 00:05:44.544 "unmap": true, 00:05:44.544 "write": true, 00:05:44.544 "write_zeroes": true 00:05:44.544 }, 00:05:44.544 "uuid": "12da34c6-77ae-5b54-9b05-1e0c76aa2bbb", 00:05:44.544 "zoned": false 00:05:44.544 } 00:05:44.544 ]' 00:05:44.544 06:23:37 -- rpc/rpc.sh@21 -- # jq length 00:05:44.803 06:23:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.803 06:23:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.803 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.803 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.803 06:23:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:44.803 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.803 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.803 06:23:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.803 06:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.803 06:23:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.803 06:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.803 06:23:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.803 06:23:37 -- rpc/rpc.sh@26 -- # jq length 00:05:44.803 06:23:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.803 00:05:44.803 real 0m0.341s 00:05:44.803 user 0m0.212s 00:05:44.803 sys 0m0.051s 00:05:44.803 06:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.803 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:44.803 ************************************ 00:05:44.803 END TEST rpc_daemon_integrity 00:05:44.803 ************************************ 00:05:44.803 06:23:37 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:44.803 06:23:37 -- rpc/rpc.sh@84 -- # killprocess 67335 00:05:44.803 06:23:37 -- common/autotest_common.sh@926 -- # '[' -z 67335 ']' 00:05:44.803 06:23:37 -- common/autotest_common.sh@930 -- # kill -0 67335 00:05:44.803 06:23:37 -- common/autotest_common.sh@931 -- # uname 00:05:44.803 06:23:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.803 06:23:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67335 00:05:44.803 06:23:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.803 06:23:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.803 killing process with pid 67335 00:05:44.803 06:23:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67335' 00:05:44.803 06:23:37 -- common/autotest_common.sh@945 -- # kill 67335 00:05:44.803 06:23:37 -- common/autotest_common.sh@950 -- # wait 67335 00:05:45.372 00:05:45.372 real 0m3.172s 00:05:45.372 user 0m4.188s 00:05:45.372 sys 0m0.802s 00:05:45.372 06:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.372 ************************************ 00:05:45.372 END TEST rpc 00:05:45.372 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.372 ************************************ 00:05:45.372 06:23:37 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.372 06:23:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.372 06:23:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.372 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.372 ************************************ 00:05:45.372 START TEST rpc_client 00:05:45.372 ************************************ 00:05:45.372 06:23:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:45.372 * Looking for test storage... 
00:05:45.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:45.372 06:23:37 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:45.372 OK 00:05:45.372 06:23:37 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.372 00:05:45.372 real 0m0.104s 00:05:45.372 user 0m0.055s 00:05:45.372 sys 0m0.055s 00:05:45.372 06:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.372 ************************************ 00:05:45.372 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.372 END TEST rpc_client 00:05:45.372 ************************************ 00:05:45.372 06:23:37 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.372 06:23:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.372 06:23:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.372 06:23:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.372 ************************************ 00:05:45.372 START TEST json_config 00:05:45.372 ************************************ 00:05:45.372 06:23:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:45.372 06:23:38 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:45.372 06:23:38 -- nvmf/common.sh@7 -- # uname -s 00:05:45.372 06:23:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.372 06:23:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.372 06:23:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.372 06:23:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.372 06:23:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.372 06:23:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.372 06:23:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.372 06:23:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.372 06:23:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.372 06:23:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.372 06:23:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:05:45.372 06:23:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:05:45.372 06:23:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.372 06:23:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.372 06:23:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.372 06:23:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.372 06:23:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.372 06:23:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.372 06:23:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.372 06:23:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.372 06:23:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.372 06:23:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.372 06:23:38 -- paths/export.sh@5 -- # export PATH 00:05:45.372 06:23:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.372 06:23:38 -- nvmf/common.sh@46 -- # : 0 00:05:45.372 06:23:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:45.372 06:23:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:45.372 06:23:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:45.372 06:23:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.372 06:23:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.372 06:23:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:45.372 06:23:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:45.372 06:23:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:45.372 06:23:38 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:45.372 06:23:38 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:45.372 06:23:38 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:45.372 06:23:38 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.372 06:23:38 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.372 06:23:38 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:45.372 06:23:38 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.372 06:23:38 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:45.372 06:23:38 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.372 06:23:38 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:45.372 06:23:38 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:45.372 06:23:38 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:45.372 06:23:38 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:45.372 06:23:38 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.372 INFO: JSON configuration test init 
00:05:45.372 06:23:38 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:45.372 06:23:38 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:45.372 06:23:38 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:45.372 06:23:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.372 06:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.372 06:23:38 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:45.372 06:23:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.372 06:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.631 06:23:38 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.631 06:23:38 -- json_config/json_config.sh@98 -- # local app=target 00:05:45.631 06:23:38 -- json_config/json_config.sh@99 -- # shift 00:05:45.631 06:23:38 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:45.631 06:23:38 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:45.631 06:23:38 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:45.631 06:23:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.631 06:23:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.631 06:23:38 -- json_config/json_config.sh@111 -- # app_pid[$app]=67641 00:05:45.631 Waiting for target to run... 00:05:45.631 06:23:38 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:45.631 06:23:38 -- json_config/json_config.sh@114 -- # waitforlisten 67641 /var/tmp/spdk_tgt.sock 00:05:45.631 06:23:38 -- common/autotest_common.sh@819 -- # '[' -z 67641 ']' 00:05:45.631 06:23:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.631 06:23:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.631 06:23:38 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.631 06:23:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.631 06:23:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.631 06:23:38 -- common/autotest_common.sh@10 -- # set +x 00:05:45.631 [2024-10-04 06:23:38.123936] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:05:45.631 [2024-10-04 06:23:38.124052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67641 ] 00:05:46.199 [2024-10-04 06:23:38.601218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.199 [2024-10-04 06:23:38.686939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.199 [2024-10-04 06:23:38.687081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.458 06:23:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.458 06:23:39 -- common/autotest_common.sh@852 -- # return 0 00:05:46.458 00:05:46.458 06:23:39 -- json_config/json_config.sh@115 -- # echo '' 00:05:46.458 06:23:39 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:46.458 06:23:39 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:46.458 06:23:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.458 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.716 06:23:39 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:46.716 06:23:39 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:46.716 06:23:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.716 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.716 06:23:39 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.716 06:23:39 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:46.716 06:23:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.283 06:23:39 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:47.283 06:23:39 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:47.283 06:23:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.283 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.283 06:23:39 -- json_config/json_config.sh@48 -- # local ret=0 00:05:47.283 06:23:39 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.283 06:23:39 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:47.283 06:23:39 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:47.283 06:23:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:47.283 06:23:39 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:47.541 06:23:39 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:47.541 06:23:39 -- json_config/json_config.sh@51 -- # local get_types 00:05:47.541 06:23:39 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:47.541 06:23:39 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:47.541 06:23:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.541 06:23:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.541 06:23:40 -- json_config/json_config.sh@58 -- # return 0 00:05:47.541 06:23:40 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:47.541 06:23:40 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
00:05:47.541 06:23:40 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:47.541 06:23:40 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:47.541 06:23:40 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:47.541 06:23:40 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:47.541 06:23:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.541 06:23:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.541 06:23:40 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:47.541 06:23:40 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:47.541 06:23:40 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:47.541 06:23:40 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.541 06:23:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.541 MallocForNvmf0 00:05:47.801 06:23:40 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.801 06:23:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:48.060 MallocForNvmf1 00:05:48.060 06:23:40 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.060 06:23:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.060 [2024-10-04 06:23:40.731770] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.319 06:23:40 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.319 06:23:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.577 06:23:41 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.577 06:23:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.836 06:23:41 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.836 06:23:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.836 06:23:41 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:48.836 06:23:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.094 [2024-10-04 06:23:41.680297] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.094 06:23:41 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:49.094 06:23:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.094 06:23:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 06:23:41 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:49.094 06:23:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.094 06:23:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.352 06:23:41 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:49.352 06:23:41 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.352 06:23:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.610 MallocBdevForConfigChangeCheck 00:05:49.610 06:23:42 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:49.610 06:23:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.610 06:23:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.610 06:23:42 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:49.610 06:23:42 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.869 INFO: shutting down applications... 00:05:49.869 06:23:42 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:49.869 06:23:42 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:49.869 06:23:42 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:49.869 06:23:42 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:49.869 06:23:42 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:50.128 Calling clear_iscsi_subsystem 00:05:50.128 Calling clear_nvmf_subsystem 00:05:50.128 Calling clear_nbd_subsystem 00:05:50.128 Calling clear_ublk_subsystem 00:05:50.128 Calling clear_vhost_blk_subsystem 00:05:50.128 Calling clear_vhost_scsi_subsystem 00:05:50.128 Calling clear_scheduler_subsystem 00:05:50.128 Calling clear_bdev_subsystem 00:05:50.128 Calling clear_accel_subsystem 00:05:50.128 Calling clear_vmd_subsystem 00:05:50.128 Calling clear_sock_subsystem 00:05:50.128 Calling clear_iobuf_subsystem 00:05:50.128 06:23:42 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:50.128 06:23:42 -- json_config/json_config.sh@396 -- # count=100 00:05:50.128 06:23:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:50.128 06:23:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.128 06:23:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.128 06:23:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:50.694 06:23:43 -- json_config/json_config.sh@398 -- # break 00:05:50.694 06:23:43 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:50.694 06:23:43 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:50.694 06:23:43 -- json_config/json_config.sh@120 -- # local app=target 00:05:50.694 06:23:43 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:50.694 06:23:43 -- json_config/json_config.sh@124 -- # [[ -n 67641 ]] 00:05:50.694 06:23:43 -- json_config/json_config.sh@127 -- # kill -SIGINT 67641 00:05:50.694 06:23:43 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:05:50.694 06:23:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:50.694 06:23:43 -- json_config/json_config.sh@130 -- # kill -0 67641 00:05:50.694 06:23:43 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:51.261 06:23:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:51.261 06:23:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:51.261 06:23:43 -- json_config/json_config.sh@130 -- # kill -0 67641 00:05:51.261 06:23:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:51.261 06:23:43 -- json_config/json_config.sh@132 -- # break 00:05:51.261 06:23:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:51.261 SPDK target shutdown done 00:05:51.261 INFO: relaunching applications... 00:05:51.261 06:23:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:51.261 06:23:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:51.261 06:23:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.261 06:23:43 -- json_config/json_config.sh@98 -- # local app=target 00:05:51.261 06:23:43 -- json_config/json_config.sh@99 -- # shift 00:05:51.261 06:23:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:51.261 06:23:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:51.261 06:23:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:51.261 06:23:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:51.261 06:23:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:51.261 06:23:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=67910 00:05:51.261 06:23:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:51.261 06:23:43 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.261 Waiting for target to run... 00:05:51.261 06:23:43 -- json_config/json_config.sh@114 -- # waitforlisten 67910 /var/tmp/spdk_tgt.sock 00:05:51.261 06:23:43 -- common/autotest_common.sh@819 -- # '[' -z 67910 ']' 00:05:51.261 06:23:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.261 06:23:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.261 06:23:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.261 06:23:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.261 06:23:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.261 [2024-10-04 06:23:43.690717] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:05:51.261 [2024-10-04 06:23:43.691116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67910 ] 00:05:51.520 [2024-10-04 06:23:44.119011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.520 [2024-10-04 06:23:44.197715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.520 [2024-10-04 06:23:44.197946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.086 [2024-10-04 06:23:44.503958] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.086 [2024-10-04 06:23:44.536062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.086 00:05:52.086 INFO: Checking if target configuration is the same... 00:05:52.086 06:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.086 06:23:44 -- common/autotest_common.sh@852 -- # return 0 00:05:52.086 06:23:44 -- json_config/json_config.sh@115 -- # echo '' 00:05:52.086 06:23:44 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:52.086 06:23:44 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:52.086 06:23:44 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.086 06:23:44 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:52.086 06:23:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.086 + '[' 2 -ne 2 ']' 00:05:52.086 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:52.086 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:52.086 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:52.086 +++ basename /dev/fd/62 00:05:52.086 ++ mktemp /tmp/62.XXX 00:05:52.086 + tmp_file_1=/tmp/62.KgG 00:05:52.086 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.086 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.086 + tmp_file_2=/tmp/spdk_tgt_config.json.tYK 00:05:52.086 + ret=0 00:05:52.086 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:52.651 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:52.651 + diff -u /tmp/62.KgG /tmp/spdk_tgt_config.json.tYK 00:05:52.651 INFO: JSON config files are the same 00:05:52.651 + echo 'INFO: JSON config files are the same' 00:05:52.651 + rm /tmp/62.KgG /tmp/spdk_tgt_config.json.tYK 00:05:52.651 + exit 0 00:05:52.651 INFO: changing configuration and checking if this can be detected... 00:05:52.651 06:23:45 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:52.651 06:23:45 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
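Both the verdict above and the change-detection pass announced here rest on one normalize-then-diff idea: dump the live configuration with save_config, sort both JSON documents into a canonical form, and let diff's exit status decide. A minimal standalone sketch of that pattern, using the paths from this run; the rpc and sort_cfg wrappers are hypothetical, and config_filter.py is assumed to filter stdin to stdout, as the + pipeline traces above suggest:

    # Canonicalize the saved config and the live config, then let diff decide.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    sort_cfg() { /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort; }
    sort_cfg < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
    rpc save_config | sort_cfg > /tmp/live_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'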
00:05:52.651 06:23:45 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.651 06:23:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.909 06:23:45 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.909 06:23:45 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:52.909 06:23:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.909 + '[' 2 -ne 2 ']' 00:05:52.909 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:52.909 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:52.909 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:52.909 +++ basename /dev/fd/62 00:05:52.909 ++ mktemp /tmp/62.XXX 00:05:52.909 + tmp_file_1=/tmp/62.5uF 00:05:52.909 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.909 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.909 + tmp_file_2=/tmp/spdk_tgt_config.json.qDh 00:05:52.909 + ret=0 00:05:52.909 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.166 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.166 + diff -u /tmp/62.5uF /tmp/spdk_tgt_config.json.qDh 00:05:53.166 + ret=1 00:05:53.166 + echo '=== Start of file: /tmp/62.5uF ===' 00:05:53.166 + cat /tmp/62.5uF 00:05:53.166 + echo '=== End of file: /tmp/62.5uF ===' 00:05:53.166 + echo '' 00:05:53.166 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qDh ===' 00:05:53.166 + cat /tmp/spdk_tgt_config.json.qDh 00:05:53.166 + echo '=== End of file: /tmp/spdk_tgt_config.json.qDh ===' 00:05:53.166 + echo '' 00:05:53.166 + rm /tmp/62.5uF /tmp/spdk_tgt_config.json.qDh 00:05:53.166 + exit 1 00:05:53.166 INFO: configuration change detected. 00:05:53.166 06:23:45 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
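The detection pass works by mutating the one disposable object in the configuration: MallocBdevForConfigChangeCheck was created during setup purely so this step has something safe to delete, after which the saved and live configs must disagree and diff must return nonzero. Roughly, continuing the sketch above and reusing the hypothetical /tmp/saved_sorted.json:

    # Mutate the live config, then require the comparison to fail.
    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    if rpc save_config | sort_cfg | diff -u /tmp/saved_sorted.json - >/dev/null; then
      echo 'ERROR: configuration change was not detected' >&2
    else
      echo 'INFO: configuration change detected.'
    fi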
00:05:53.166 06:23:45 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:53.166 06:23:45 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:53.166 06:23:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.166 06:23:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.166 06:23:45 -- json_config/json_config.sh@360 -- # local ret=0 00:05:53.166 06:23:45 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:53.166 06:23:45 -- json_config/json_config.sh@370 -- # [[ -n 67910 ]] 00:05:53.166 06:23:45 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:53.166 06:23:45 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:53.166 06:23:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.166 06:23:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.166 06:23:45 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:53.425 06:23:45 -- json_config/json_config.sh@246 -- # uname -s 00:05:53.425 06:23:45 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:53.425 06:23:45 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:53.425 06:23:45 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:53.425 06:23:45 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:53.425 06:23:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.425 06:23:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.425 06:23:45 -- json_config/json_config.sh@376 -- # killprocess 67910 00:05:53.425 06:23:45 -- common/autotest_common.sh@926 -- # '[' -z 67910 ']' 00:05:53.425 06:23:45 -- common/autotest_common.sh@930 -- # kill -0 67910 00:05:53.425 06:23:45 -- common/autotest_common.sh@931 -- # uname 00:05:53.425 06:23:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.425 06:23:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67910 00:05:53.425 killing process with pid 67910 00:05:53.425 06:23:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.425 06:23:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.425 06:23:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67910' 00:05:53.425 06:23:45 -- common/autotest_common.sh@945 -- # kill 67910 00:05:53.425 06:23:45 -- common/autotest_common.sh@950 -- # wait 67910 00:05:53.683 06:23:46 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.683 06:23:46 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:53.683 06:23:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.683 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.683 INFO: Success 00:05:53.683 06:23:46 -- json_config/json_config.sh@381 -- # return 0 00:05:53.683 06:23:46 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:53.683 ************************************ 00:05:53.683 END TEST json_config 00:05:53.683 ************************************ 00:05:53.683 00:05:53.683 real 0m8.293s 00:05:53.683 user 0m11.729s 00:05:53.683 sys 0m1.937s 00:05:53.683 06:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.683 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.683 06:23:46 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:53.683 
06:23:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.683 06:23:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.683 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.683 ************************************ 00:05:53.683 START TEST json_config_extra_key 00:05:53.683 ************************************ 00:05:53.683 06:23:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:53.942 06:23:46 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:53.942 06:23:46 -- nvmf/common.sh@7 -- # uname -s 00:05:53.942 06:23:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.942 06:23:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.942 06:23:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.942 06:23:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.942 06:23:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.942 06:23:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.942 06:23:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.942 06:23:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.942 06:23:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.942 06:23:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.942 06:23:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:05:53.942 06:23:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:05:53.942 06:23:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.942 06:23:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.942 06:23:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.942 06:23:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:53.942 06:23:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.942 06:23:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.942 06:23:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.942 06:23:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.943 06:23:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.943 06:23:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:53.943 06:23:46 -- paths/export.sh@5 -- # export PATH 00:05:53.943 06:23:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.943 06:23:46 -- nvmf/common.sh@46 -- # : 0 00:05:53.943 06:23:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:53.943 06:23:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:53.943 06:23:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:53.943 06:23:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.943 06:23:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.943 06:23:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:53.943 06:23:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:53.943 06:23:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:53.943 INFO: launching applications... 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68085 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:53.943 Waiting for target to run... 00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:05:53.943 06:23:46 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68085 /var/tmp/spdk_tgt.sock 00:05:53.943 06:23:46 -- common/autotest_common.sh@819 -- # '[' -z 68085 ']' 00:05:53.943 06:23:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.943 06:23:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.943 06:23:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.943 06:23:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.943 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:05:53.943 [2024-10-04 06:23:46.454379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:05:53.943 [2024-10-04 06:23:46.454655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68085 ] 00:05:54.201 [2024-10-04 06:23:46.877527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.460 [2024-10-04 06:23:46.932801] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.460 [2024-10-04 06:23:46.933308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.027 06:23:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.027 06:23:47 -- common/autotest_common.sh@852 -- # return 0 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:55.027 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:55.027 INFO: shutting down applications... 
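The shutdown that follows is the same bounded SIGINT-and-poll used for the json_config targets earlier in this run: send SIGINT, then probe the pid with kill -0 at half-second intervals for at most 30 tries before declaring the target gone. A generic sketch of that loop, with this run's pid filled in for concreteness:

    # Graceful shutdown: SIGINT, then poll liveness for at most ~15 seconds.
    pid=68085
    kill -SIGINT "$pid"
    i=0
    while (( i < 30 )) && kill -0 "$pid" 2>/dev/null; do
      sleep 0.5
      i=$(( i + 1 ))
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'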
00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68085 ]] 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68085 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68085 00:05:55.027 06:23:47 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68085 00:05:55.295 SPDK target shutdown done 00:05:55.295 Success 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:55.295 06:23:47 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:55.295 00:05:55.295 real 0m1.631s 00:05:55.295 user 0m1.495s 00:05:55.295 sys 0m0.437s 00:05:55.295 ************************************ 00:05:55.295 END TEST json_config_extra_key 00:05:55.295 ************************************ 00:05:55.295 06:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.295 06:23:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.571 06:23:47 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.571 06:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.571 06:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.571 06:23:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.571 ************************************ 00:05:55.571 START TEST alias_rpc 00:05:55.571 ************************************ 00:05:55.571 06:23:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.571 * Looking for test storage... 00:05:55.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:55.571 06:23:48 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.571 06:23:48 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68160 00:05:55.571 06:23:48 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68160 00:05:55.571 06:23:48 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.571 06:23:48 -- common/autotest_common.sh@819 -- # '[' -z 68160 ']' 00:05:55.571 06:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.571 06:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.571 06:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:55.571 06:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.571 06:23:48 -- common/autotest_common.sh@10 -- # set +x 00:05:55.571 [2024-10-04 06:23:48.134044] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:05:55.571 [2024-10-04 06:23:48.134334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68160 ] 00:05:55.830 [2024-10-04 06:23:48.261535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.830 [2024-10-04 06:23:48.326823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.830 [2024-10-04 06:23:48.327273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.398 06:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.398 06:23:49 -- common/autotest_common.sh@852 -- # return 0 00:05:56.398 06:23:49 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:56.657 06:23:49 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68160 00:05:56.657 06:23:49 -- common/autotest_common.sh@926 -- # '[' -z 68160 ']' 00:05:56.657 06:23:49 -- common/autotest_common.sh@930 -- # kill -0 68160 00:05:56.657 06:23:49 -- common/autotest_common.sh@931 -- # uname 00:05:56.657 06:23:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.657 06:23:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68160 00:05:56.916 killing process with pid 68160 00:05:56.916 06:23:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.916 06:23:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.916 06:23:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68160' 00:05:56.916 06:23:49 -- common/autotest_common.sh@945 -- # kill 68160 00:05:56.916 06:23:49 -- common/autotest_common.sh@950 -- # wait 68160 00:05:57.176 ************************************ 00:05:57.176 END TEST alias_rpc 00:05:57.176 ************************************ 00:05:57.176 00:05:57.176 real 0m1.673s 00:05:57.176 user 0m1.867s 00:05:57.176 sys 0m0.410s 00:05:57.176 06:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.176 06:23:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 06:23:49 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:05:57.176 06:23:49 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.176 06:23:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.176 06:23:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.176 06:23:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 START TEST dpdk_mem_utility 00:05:57.176 ************************************ 00:05:57.176 06:23:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.176 * Looking for test storage... 
00:05:57.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.176 06:23:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.176 06:23:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68246 00:05:57.176 06:23:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68246 00:05:57.176 06:23:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.176 06:23:49 -- common/autotest_common.sh@819 -- # '[' -z 68246 ']' 00:05:57.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.176 06:23:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.176 06:23:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.176 06:23:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.176 06:23:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.176 06:23:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.434 [2024-10-04 06:23:49.865084] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:05:57.434 [2024-10-04 06:23:49.865185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68246 ] 00:05:57.434 [2024-10-04 06:23:50.001330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.434 [2024-10-04 06:23:50.060550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.434 [2024-10-04 06:23:50.060744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.370 06:23:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.370 06:23:50 -- common/autotest_common.sh@852 -- # return 0 00:05:58.370 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.370 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.370 06:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.370 06:23:50 -- common/autotest_common.sh@10 -- # set +x 00:05:58.370 { 00:05:58.370 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.370 } 00:05:58.370 06:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.370 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.370 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:58.370 1 heaps totaling size 814.000000 MiB 00:05:58.370 size: 814.000000 MiB heap id: 0 00:05:58.370 end heaps---------- 00:05:58.370 8 mempools totaling size 598.116089 MiB 00:05:58.370 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.370 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.370 size: 84.521057 MiB name: bdev_io_68246 00:05:58.370 size: 51.011292 MiB name: evtpool_68246 00:05:58.370 size: 50.003479 MiB name: msgpool_68246 00:05:58.370 size: 21.763794 MiB name: PDU_Pool 00:05:58.370 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.370 size: 0.026123 MiB name: Session_Pool 00:05:58.370 end mempools------- 00:05:58.370 6 memzones totaling size 4.142822 MiB 00:05:58.370 size: 1.000366 MiB name: RG_ring_0_68246 
00:05:58.370 size: 1.000366 MiB name: RG_ring_1_68246 00:05:58.370 size: 1.000366 MiB name: RG_ring_4_68246 00:05:58.370 size: 1.000366 MiB name: RG_ring_5_68246 00:05:58.370 size: 0.125366 MiB name: RG_ring_2_68246 00:05:58.370 size: 0.015991 MiB name: RG_ring_3_68246 00:05:58.370 end memzones------- 00:05:58.370 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.370 heap id: 0 total size: 814.000000 MiB number of busy elements: 211 number of free elements: 15 00:05:58.370 list of free elements. size: 12.488220 MiB 00:05:58.370 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:58.370 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:58.370 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:58.370 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:58.370 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:58.370 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:58.370 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:58.370 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:58.370 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:58.370 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:05:58.370 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:58.370 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:58.370 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:58.370 element at address: 0x200027e00000 with size: 0.398865 MiB 00:05:58.370 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:58.370 list of standard malloc elements. size: 199.249207 MiB 00:05:58.370 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:58.370 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:58.370 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:58.370 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:58.370 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:58.370 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:58.370 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:58.370 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:58.370 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:58.370 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:58.370 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:58.370 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:58.371 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94480 
with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:58.371 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e66280 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ce80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e280 with size: 0.000183 MiB 
00:05:58.371 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:58.371 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:58.371 list of memzone associated elements. 
size: 602.262573 MiB 00:05:58.371 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:58.371 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.371 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:58.371 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.371 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:58.371 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68246_0 00:05:58.371 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:58.372 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68246_0 00:05:58.372 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:58.372 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68246_0 00:05:58.372 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:58.372 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.372 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:58.372 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.372 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:58.372 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68246 00:05:58.372 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:58.372 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68246 00:05:58.372 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:58.372 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68246 00:05:58.372 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:58.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.372 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:58.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.372 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:58.372 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.372 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:58.372 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.372 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:58.372 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68246 00:05:58.372 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:58.372 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68246 00:05:58.372 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:58.372 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68246 00:05:58.372 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:58.372 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68246 00:05:58.372 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:58.372 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68246 00:05:58.372 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:58.372 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.372 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:58.372 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.372 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:58.372 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.372 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:58.372 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68246 00:05:58.372 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:58.372 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.372 element at address: 0x200027e66340 with size: 0.023743 MiB 00:05:58.372 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.372 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:58.372 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68246 00:05:58.372 element at address: 0x200027e6c480 with size: 0.002441 MiB 00:05:58.372 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.372 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:58.372 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68246 00:05:58.372 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:58.372 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68246 00:05:58.372 element at address: 0x200027e6cf40 with size: 0.000305 MiB 00:05:58.372 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.372 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.372 06:23:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68246 00:05:58.372 06:23:50 -- common/autotest_common.sh@926 -- # '[' -z 68246 ']' 00:05:58.372 06:23:50 -- common/autotest_common.sh@930 -- # kill -0 68246 00:05:58.372 06:23:50 -- common/autotest_common.sh@931 -- # uname 00:05:58.372 06:23:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.372 06:23:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68246 00:05:58.372 06:23:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.372 killing process with pid 68246 00:05:58.372 06:23:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.372 06:23:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68246' 00:05:58.372 06:23:50 -- common/autotest_common.sh@945 -- # kill 68246 00:05:58.372 06:23:50 -- common/autotest_common.sh@950 -- # wait 68246 00:05:58.937 00:05:58.937 real 0m1.582s 00:05:58.937 user 0m1.690s 00:05:58.937 sys 0m0.413s 00:05:58.937 06:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.937 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.937 ************************************ 00:05:58.937 END TEST dpdk_mem_utility 00:05:58.937 ************************************ 00:05:58.937 06:23:51 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.937 06:23:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.937 06:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.938 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.938 ************************************ 00:05:58.938 START TEST event 00:05:58.938 ************************************ 00:05:58.938 06:23:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.938 * Looking for test storage... 
00:05:58.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.938 06:23:51 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:58.938 06:23:51 -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.938 06:23:51 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.938 06:23:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:58.938 06:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.938 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.938 ************************************ 00:05:58.938 START TEST event_perf 00:05:58.938 ************************************ 00:05:58.938 06:23:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.938 Running I/O for 1 seconds...[2024-10-04 06:23:51.469674] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:05:58.938 [2024-10-04 06:23:51.469754] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68340 ] 00:05:58.938 [2024-10-04 06:23:51.600333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.195 [2024-10-04 06:23:51.660380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.195 [2024-10-04 06:23:51.660530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.195 [2024-10-04 06:23:51.660648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.195 [2024-10-04 06:23:51.660648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.129 Running I/O for 1 seconds... 00:06:00.129 lcore 0: 131796 00:06:00.129 lcore 1: 131795 00:06:00.129 lcore 2: 131794 00:06:00.129 lcore 3: 131795 00:06:00.129 done. 00:06:00.129 00:06:00.129 real 0m1.261s 00:06:00.129 user 0m4.088s 00:06:00.129 sys 0m0.052s 00:06:00.129 06:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.129 06:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.129 ************************************ 00:06:00.129 END TEST event_perf 00:06:00.129 ************************************ 00:06:00.129 06:23:52 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.129 06:23:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:00.129 06:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.129 06:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.129 ************************************ 00:06:00.129 START TEST event_reactor 00:06:00.129 ************************************ 00:06:00.129 06:23:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.129 [2024-10-04 06:23:52.788549] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:00.129 [2024-10-04 06:23:52.788661] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68373 ] 00:06:00.388 [2024-10-04 06:23:52.924951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.388 [2024-10-04 06:23:52.984921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.776 test_start 00:06:01.776 oneshot 00:06:01.776 tick 100 00:06:01.776 tick 100 00:06:01.776 tick 250 00:06:01.776 tick 100 00:06:01.776 tick 100 00:06:01.776 tick 250 00:06:01.776 tick 500 00:06:01.776 tick 100 00:06:01.776 tick 100 00:06:01.776 tick 100 00:06:01.776 tick 250 00:06:01.776 tick 100 00:06:01.776 tick 100 00:06:01.776 test_end 00:06:01.776 ************************************ 00:06:01.776 END TEST event_reactor 00:06:01.776 ************************************ 00:06:01.776 00:06:01.776 real 0m1.270s 00:06:01.776 user 0m1.105s 00:06:01.776 sys 0m0.059s 00:06:01.776 06:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.777 06:23:54 -- common/autotest_common.sh@10 -- # set +x 00:06:01.777 06:23:54 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.777 06:23:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:01.777 06:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.777 06:23:54 -- common/autotest_common.sh@10 -- # set +x 00:06:01.777 ************************************ 00:06:01.777 START TEST event_reactor_perf 00:06:01.777 ************************************ 00:06:01.777 06:23:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.777 [2024-10-04 06:23:54.108723] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:01.777 [2024-10-04 06:23:54.108791] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68403 ] 00:06:01.777 [2024-10-04 06:23:54.236338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.777 [2024-10-04 06:23:54.293670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.719 test_start 00:06:02.719 test_end 00:06:02.719 Performance: 462439 events per second 00:06:02.719 00:06:02.719 real 0m1.251s 00:06:02.719 user 0m1.103s 00:06:02.719 sys 0m0.043s 00:06:02.720 06:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.720 06:23:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.720 ************************************ 00:06:02.720 END TEST event_reactor_perf 00:06:02.720 ************************************ 00:06:02.720 06:23:55 -- event/event.sh@49 -- # uname -s 00:06:02.720 06:23:55 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:02.720 06:23:55 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.720 06:23:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.720 06:23:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.720 06:23:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.720 ************************************ 00:06:02.720 START TEST event_scheduler 00:06:02.979 ************************************ 00:06:02.979 06:23:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:02.979 * Looking for test storage... 00:06:02.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:02.979 06:23:55 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:02.979 06:23:55 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68469 00:06:02.979 06:23:55 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.979 06:23:55 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:02.979 06:23:55 -- scheduler/scheduler.sh@37 -- # waitforlisten 68469 00:06:02.979 06:23:55 -- common/autotest_common.sh@819 -- # '[' -z 68469 ']' 00:06:02.979 06:23:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.979 06:23:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.979 06:23:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.979 06:23:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.979 06:23:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.979 [2024-10-04 06:23:55.534047] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:02.979 [2024-10-04 06:23:55.534373] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68469 ] 00:06:03.238 [2024-10-04 06:23:55.673761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.238 [2024-10-04 06:23:55.743659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.238 [2024-10-04 06:23:55.743789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.238 [2024-10-04 06:23:55.743956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.238 [2024-10-04 06:23:55.743960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.804 06:23:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.804 06:23:56 -- common/autotest_common.sh@852 -- # return 0 00:06:03.804 06:23:56 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.804 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.804 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:03.804 POWER: Env isn't set yet! 00:06:03.804 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:03.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.804 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.804 POWER: Attempting to initialise PSTAT power management... 00:06:03.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.804 POWER: Cannot set governor of lcore 0 to performance 00:06:03.804 POWER: Attempting to initialise CPPC power management... 00:06:03.804 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.804 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.804 POWER: Attempting to initialise VM power management... 00:06:03.804 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:03.805 POWER: Unable to set Power Management Environment for lcore 0 00:06:03.805 [2024-10-04 06:23:56.479886] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:03.805 [2024-10-04 06:23:56.479900] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:03.805 [2024-10-04 06:23:56.479909] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.805 [2024-10-04 06:23:56.479921] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.805 [2024-10-04 06:23:56.479930] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.805 [2024-10-04 06:23:56.479936] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.805 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.805 06:23:56 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.805 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.805 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 [2024-10-04 06:23:56.564590] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
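The scheduler bring-up traced above boils down to a short RPC sequence. A minimal sketch of that flow, assuming the scheduler test app is already listening on the default /var/tmp/spdk.sock and that the PYTHONPATH shown (illustrative) makes scheduler_plugin importable:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler

$rpc framework_set_scheduler dynamic    # select the dynamic scheduler (POWER/governor warnings above are non-fatal)
$rpc framework_start_init               # finish subsystem initialization

# Create a thread pinned to core 0 (cpumask 0x1) reporting 100% busy; the
# plugin prints the new thread id, which the later RPCs consume.
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"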
00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.064 06:23:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.064 06:23:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 ************************************ 00:06:04.064 START TEST scheduler_create_thread 00:06:04.064 ************************************ 00:06:04.064 06:23:56 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 2 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 3 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 4 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 5 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 6 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 7 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 8 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 9 00:06:04.064 
06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 10 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.064 06:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.064 06:23:56 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.064 06:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.064 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:06:05.972 06:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.972 06:23:58 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.972 06:23:58 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.972 06:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.972 06:23:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.539 ************************************ 00:06:06.539 END TEST scheduler_create_thread 00:06:06.539 ************************************ 00:06:06.539 06:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.539 00:06:06.539 real 0m2.613s 00:06:06.539 user 0m0.019s 00:06:06.539 sys 0m0.006s 00:06:06.539 06:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.539 06:23:59 -- common/autotest_common.sh@10 -- # set +x 00:06:06.798 06:23:59 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.798 06:23:59 -- scheduler/scheduler.sh@46 -- # killprocess 68469 00:06:06.798 06:23:59 -- common/autotest_common.sh@926 -- # '[' -z 68469 ']' 00:06:06.798 06:23:59 -- common/autotest_common.sh@930 -- # kill -0 68469 00:06:06.798 06:23:59 -- common/autotest_common.sh@931 -- # uname 00:06:06.798 06:23:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.798 06:23:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68469 00:06:06.798 killing process with pid 68469 00:06:06.798 06:23:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:06.798 06:23:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:06.798 06:23:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68469' 00:06:06.798 06:23:59 -- common/autotest_common.sh@945 -- # kill 68469 00:06:06.798 06:23:59 -- common/autotest_common.sh@950 -- # wait 68469 00:06:07.058 [2024-10-04 06:23:59.668470] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
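The killprocess helper that tears these apps down (traced repeatedly above from common/autotest_common.sh) follows the same pattern each time. A condensed sketch, with the sudo special-casing and argument checks trimmed:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1      # fail fast if the process is already gone
    if [ "$(uname)" = Linux ]; then
        # The real helper inspects this name to special-case sudo-wrapped apps.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                     # reap it and propagate the exit status
}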
00:06:07.317 00:06:07.317 real 0m4.485s 00:06:07.317 user 0m8.506s 00:06:07.317 sys 0m0.365s 00:06:07.317 06:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.317 06:23:59 -- common/autotest_common.sh@10 -- # set +x 00:06:07.317 ************************************ 00:06:07.317 END TEST event_scheduler 00:06:07.317 ************************************ 00:06:07.317 06:23:59 -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.317 06:23:59 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.317 06:23:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.317 06:23:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.317 06:23:59 -- common/autotest_common.sh@10 -- # set +x 00:06:07.317 ************************************ 00:06:07.317 START TEST app_repeat 00:06:07.317 ************************************ 00:06:07.317 06:23:59 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:07.317 06:23:59 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.317 06:23:59 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.317 06:23:59 -- event/event.sh@13 -- # local nbd_list 00:06:07.317 06:23:59 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.317 06:23:59 -- event/event.sh@14 -- # local bdev_list 00:06:07.317 06:23:59 -- event/event.sh@15 -- # local repeat_times=4 00:06:07.317 06:23:59 -- event/event.sh@17 -- # modprobe nbd 00:06:07.317 06:23:59 -- event/event.sh@19 -- # repeat_pid=68581 00:06:07.317 06:23:59 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.317 06:23:59 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.317 06:23:59 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68581' 00:06:07.317 Process app_repeat pid: 68581 00:06:07.317 spdk_app_start Round 0 00:06:07.317 06:23:59 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.317 06:23:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.317 06:23:59 -- event/event.sh@25 -- # waitforlisten 68581 /var/tmp/spdk-nbd.sock 00:06:07.317 06:23:59 -- common/autotest_common.sh@819 -- # '[' -z 68581 ']' 00:06:07.317 06:23:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.317 06:23:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.317 06:23:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.317 06:23:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.317 06:23:59 -- common/autotest_common.sh@10 -- # set +x 00:06:07.317 [2024-10-04 06:23:59.970476] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:07.317 [2024-10-04 06:23:59.970570] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68581 ] 00:06:07.576 [2024-10-04 06:24:00.103096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.576 [2024-10-04 06:24:00.164227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.576 [2024-10-04 06:24:00.164235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.513 06:24:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.513 06:24:00 -- common/autotest_common.sh@852 -- # return 0 00:06:08.513 06:24:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.772 Malloc0 00:06:08.772 06:24:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.030 Malloc1 00:06:09.030 06:24:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@12 -- # local i 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.030 06:24:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.289 /dev/nbd0 00:06:09.289 06:24:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.289 06:24:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.289 06:24:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:09.289 06:24:01 -- common/autotest_common.sh@857 -- # local i 00:06:09.289 06:24:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.289 06:24:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.289 06:24:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:09.289 06:24:01 -- common/autotest_common.sh@861 -- # break 00:06:09.289 06:24:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.289 06:24:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.289 06:24:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.289 1+0 records in 00:06:09.289 1+0 records out 00:06:09.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197883 s, 20.7 MB/s 00:06:09.289 06:24:01 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.289 06:24:01 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.289 06:24:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.289 06:24:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.289 06:24:01 -- common/autotest_common.sh@877 -- # return 0 00:06:09.289 06:24:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.289 06:24:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.289 06:24:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.548 /dev/nbd1 00:06:09.548 06:24:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.548 06:24:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.548 06:24:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.548 06:24:02 -- common/autotest_common.sh@857 -- # local i 00:06:09.548 06:24:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.548 06:24:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.548 06:24:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.548 06:24:02 -- common/autotest_common.sh@861 -- # break 00:06:09.548 06:24:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.548 06:24:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.548 06:24:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.548 1+0 records in 00:06:09.548 1+0 records out 00:06:09.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264391 s, 15.5 MB/s 00:06:09.549 06:24:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.549 06:24:02 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.549 06:24:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.549 06:24:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.549 06:24:02 -- common/autotest_common.sh@877 -- # return 0 00:06:09.549 06:24:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.549 06:24:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.549 06:24:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.549 06:24:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.549 06:24:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.116 { 00:06:10.116 "bdev_name": "Malloc0", 00:06:10.116 "nbd_device": "/dev/nbd0" 00:06:10.116 }, 00:06:10.116 { 00:06:10.116 "bdev_name": "Malloc1", 00:06:10.116 "nbd_device": "/dev/nbd1" 00:06:10.116 } 00:06:10.116 ]' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.116 { 00:06:10.116 "bdev_name": "Malloc0", 00:06:10.116 "nbd_device": "/dev/nbd0" 00:06:10.116 }, 00:06:10.116 { 00:06:10.116 "bdev_name": "Malloc1", 00:06:10.116 "nbd_device": "/dev/nbd1" 00:06:10.116 } 00:06:10.116 ]' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.116 /dev/nbd1' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.116 /dev/nbd1' 00:06:10.116 06:24:02 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.116 256+0 records in 00:06:10.116 256+0 records out 00:06:10.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727063 s, 144 MB/s 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.116 256+0 records in 00:06:10.116 256+0 records out 00:06:10.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245878 s, 42.6 MB/s 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.116 256+0 records in 00:06:10.116 256+0 records out 00:06:10.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029778 s, 35.2 MB/s 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@51 -- # local i 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.116 06:24:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@41 -- # break 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.374 06:24:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@41 -- # break 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.633 06:24:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@65 -- # true 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.892 06:24:03 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.892 06:24:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.150 06:24:03 -- event/event.sh@35 -- # sleep 3 00:06:11.409 [2024-10-04 06:24:03.973693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.409 [2024-10-04 06:24:04.029737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.409 [2024-10-04 06:24:04.029754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.409 [2024-10-04 06:24:04.084488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.409 [2024-10-04 06:24:04.084568] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
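Round 0 above exercises the nbd write/verify helpers end to end. Stripped of the xtrace plumbing, the data path is just dd and cmp; a minimal sketch with illustrative device names and temp-file path:

tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write through the nbd device
    cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
done
rm "$tmp"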
00:06:14.733 06:24:06 -- event/event.sh@23 -- # for i in {0..2} 00:06:14.733 spdk_app_start Round 1 00:06:14.733 06:24:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.733 06:24:06 -- event/event.sh@25 -- # waitforlisten 68581 /var/tmp/spdk-nbd.sock 00:06:14.733 06:24:06 -- common/autotest_common.sh@819 -- # '[' -z 68581 ']' 00:06:14.733 06:24:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.733 06:24:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.733 06:24:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.733 06:24:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.733 06:24:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.733 06:24:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.733 06:24:07 -- common/autotest_common.sh@852 -- # return 0 00:06:14.733 06:24:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.733 Malloc0 00:06:14.733 06:24:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.992 Malloc1 00:06:14.992 06:24:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.992 06:24:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.250 /dev/nbd0 00:06:15.250 06:24:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.250 06:24:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.250 06:24:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:15.250 06:24:07 -- common/autotest_common.sh@857 -- # local i 00:06:15.250 06:24:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:15.250 06:24:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:15.250 06:24:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:15.250 06:24:07 -- common/autotest_common.sh@861 -- # break 00:06:15.250 06:24:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:15.250 06:24:07 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:06:15.250 06:24:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.250 1+0 records in 00:06:15.250 1+0 records out 00:06:15.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318938 s, 12.8 MB/s 00:06:15.250 06:24:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.250 06:24:07 -- common/autotest_common.sh@874 -- # size=4096 00:06:15.250 06:24:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.250 06:24:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:15.250 06:24:07 -- common/autotest_common.sh@877 -- # return 0 00:06:15.250 06:24:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.250 06:24:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.250 06:24:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.508 /dev/nbd1 00:06:15.508 06:24:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.766 06:24:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:15.766 06:24:08 -- common/autotest_common.sh@857 -- # local i 00:06:15.766 06:24:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:15.766 06:24:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:15.766 06:24:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:15.766 06:24:08 -- common/autotest_common.sh@861 -- # break 00:06:15.766 06:24:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:15.766 06:24:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:15.766 06:24:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.766 1+0 records in 00:06:15.766 1+0 records out 00:06:15.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235492 s, 17.4 MB/s 00:06:15.766 06:24:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.766 06:24:08 -- common/autotest_common.sh@874 -- # size=4096 00:06:15.766 06:24:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.766 06:24:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:15.766 06:24:08 -- common/autotest_common.sh@877 -- # return 0 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.766 06:24:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.024 { 00:06:16.024 "bdev_name": "Malloc0", 00:06:16.024 "nbd_device": "/dev/nbd0" 00:06:16.024 }, 00:06:16.024 { 00:06:16.024 "bdev_name": "Malloc1", 00:06:16.024 "nbd_device": "/dev/nbd1" 00:06:16.024 } 00:06:16.024 ]' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.024 { 00:06:16.024 "bdev_name": "Malloc0", 00:06:16.024 "nbd_device": "/dev/nbd0" 00:06:16.024 }, 00:06:16.024 { 00:06:16.024 "bdev_name": "Malloc1", 00:06:16.024 "nbd_device": "/dev/nbd1" 00:06:16.024 } 
00:06:16.024 ]' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.024 /dev/nbd1' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.024 /dev/nbd1' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.024 256+0 records in 00:06:16.024 256+0 records out 00:06:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105109 s, 99.8 MB/s 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.024 256+0 records in 00:06:16.024 256+0 records out 00:06:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259739 s, 40.4 MB/s 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.024 256+0 records in 00:06:16.024 256+0 records out 00:06:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287466 s, 36.5 MB/s 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.024 06:24:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
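Each round traced above drives the same write/verify pass: 1 MiB of /dev/urandom is staged in nbdrandtest, pushed onto every nbd device with O_DIRECT, and each device is then compared back against the staging file before it is deleted. A sketch of that helper as it can be read off the trace (the dd geometry, oflag=direct and cmp -b -n 1M are verbatim; the staging path is shortened here and the error handling is assumed):

#!/usr/bin/env bash
# Sketch of the write/verify pass at bdev/nbd_common.sh@70-85 in the trace.
nbd_dd_data_verify() {
    local nbd_list=($1) # deliberately word-split, e.g. "/dev/nbd0 /dev/nbd1"
    local operation=$2  # "write" or "verify"
    local tmp_file=/tmp/nbdrandtest # the trace stages under the repo's test/event dir
    local i
    if [[ $operation == write ]]; then
        # Stage 1 MiB of random data, then copy it to every device past the page cache.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [[ $operation == verify ]]; then
        # Byte-compare the first 1 MiB of every device against the staging file.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}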
00:06:16.025 06:24:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@51 -- # local i 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.025 06:24:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@41 -- # break 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.591 06:24:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.591 06:24:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.849 06:24:09 -- bdev/nbd_common.sh@41 -- # break 00:06:16.849 06:24:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.849 06:24:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.849 06:24:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.849 06:24:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@65 -- # true 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.107 06:24:09 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.107 06:24:09 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.365 06:24:09 -- event/event.sh@35 -- # sleep 3 00:06:17.623 [2024-10-04 06:24:10.097173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.623 [2024-10-04 06:24:10.166682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.623 [2024-10-04 06:24:10.166700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.623 [2024-10-04 06:24:10.222962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
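Between rounds the harness asserts that no nbd device is still exported: nbd_get_disks returns a JSON array, jq projects the .nbd_device field, and grep -c counts the surviving /dev/nbd entries. A sketch of that counter (the jq filter and the grep are verbatim above; note the || true guard, visible as the bare @65 'true' step, which keeps an empty list, where grep -c exits nonzero, from killing the pipeline):

#!/usr/bin/env bash
# Sketch of nbd_get_count (bdev/nbd_common.sh@61-66); rpc.py path shortened.
nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits 1 when it counts zero matches, hence the guard.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}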
00:06:17.623 [2024-10-04 06:24:10.223028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.909 06:24:12 -- event/event.sh@23 -- # for i in {0..2} 00:06:20.909 spdk_app_start Round 2 00:06:20.909 06:24:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.909 06:24:12 -- event/event.sh@25 -- # waitforlisten 68581 /var/tmp/spdk-nbd.sock 00:06:20.909 06:24:12 -- common/autotest_common.sh@819 -- # '[' -z 68581 ']' 00:06:20.909 06:24:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.909 06:24:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.909 06:24:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.909 06:24:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:20.909 06:24:12 -- common/autotest_common.sh@10 -- # set +x 00:06:20.909 06:24:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.909 06:24:13 -- common/autotest_common.sh@852 -- # return 0 00:06:20.909 06:24:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.909 Malloc0 00:06:20.909 06:24:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.168 Malloc1 00:06:21.168 06:24:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.168 06:24:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.427 /dev/nbd0 00:06:21.427 06:24:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.427 06:24:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.427 06:24:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:21.427 06:24:13 -- common/autotest_common.sh@857 -- # local i 00:06:21.427 06:24:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:21.427 06:24:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:21.427 06:24:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:21.427 06:24:13 -- common/autotest_common.sh@861 
-- # break 00:06:21.427 06:24:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:21.427 06:24:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:21.427 06:24:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.427 1+0 records in 00:06:21.427 1+0 records out 00:06:21.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274469 s, 14.9 MB/s 00:06:21.427 06:24:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.427 06:24:13 -- common/autotest_common.sh@874 -- # size=4096 00:06:21.427 06:24:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.427 06:24:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:21.427 06:24:13 -- common/autotest_common.sh@877 -- # return 0 00:06:21.427 06:24:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.427 06:24:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.427 06:24:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.685 /dev/nbd1 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.685 06:24:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:21.685 06:24:14 -- common/autotest_common.sh@857 -- # local i 00:06:21.685 06:24:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:21.685 06:24:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:21.685 06:24:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:21.685 06:24:14 -- common/autotest_common.sh@861 -- # break 00:06:21.685 06:24:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:21.685 06:24:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:21.685 06:24:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.685 1+0 records in 00:06:21.685 1+0 records out 00:06:21.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338991 s, 12.1 MB/s 00:06:21.685 06:24:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.685 06:24:14 -- common/autotest_common.sh@874 -- # size=4096 00:06:21.685 06:24:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.685 06:24:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:21.685 06:24:14 -- common/autotest_common.sh@877 -- # return 0 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.685 06:24:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.943 { 00:06:21.943 "bdev_name": "Malloc0", 00:06:21.943 "nbd_device": "/dev/nbd0" 00:06:21.943 }, 00:06:21.943 { 00:06:21.943 "bdev_name": "Malloc1", 00:06:21.943 "nbd_device": "/dev/nbd1" 00:06:21.943 } 00:06:21.943 ]' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.943 { 00:06:21.943 "bdev_name": "Malloc0", 00:06:21.943 
"nbd_device": "/dev/nbd0" 00:06:21.943 }, 00:06:21.943 { 00:06:21.943 "bdev_name": "Malloc1", 00:06:21.943 "nbd_device": "/dev/nbd1" 00:06:21.943 } 00:06:21.943 ]' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.943 /dev/nbd1' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.943 /dev/nbd1' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.943 256+0 records in 00:06:21.943 256+0 records out 00:06:21.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00695403 s, 151 MB/s 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.943 256+0 records in 00:06:21.943 256+0 records out 00:06:21.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025211 s, 41.6 MB/s 00:06:21.943 06:24:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.944 256+0 records in 00:06:21.944 256+0 records out 00:06:21.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295612 s, 35.5 MB/s 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.944 06:24:14 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@51 -- # local i 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.944 06:24:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@41 -- # break 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.202 06:24:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@41 -- # break 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.461 06:24:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@65 -- # true 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.719 06:24:15 -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.719 06:24:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.977 06:24:15 -- event/event.sh@35 -- # sleep 3 00:06:23.236 [2024-10-04 06:24:15.691049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.236 [2024-10-04 06:24:15.764751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.236 [2024-10-04 06:24:15.764765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.236 [2024-10-04 06:24:15.825123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:23.236 [2024-10-04 06:24:15.825181] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.520 06:24:18 -- event/event.sh@38 -- # waitforlisten 68581 /var/tmp/spdk-nbd.sock 00:06:26.520 06:24:18 -- common/autotest_common.sh@819 -- # '[' -z 68581 ']' 00:06:26.520 06:24:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.520 06:24:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.520 06:24:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.520 06:24:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.520 06:24:18 -- common/autotest_common.sh@10 -- # set +x 00:06:26.520 06:24:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.520 06:24:18 -- common/autotest_common.sh@852 -- # return 0 00:06:26.520 06:24:18 -- event/event.sh@39 -- # killprocess 68581 00:06:26.520 06:24:18 -- common/autotest_common.sh@926 -- # '[' -z 68581 ']' 00:06:26.520 06:24:18 -- common/autotest_common.sh@930 -- # kill -0 68581 00:06:26.520 06:24:18 -- common/autotest_common.sh@931 -- # uname 00:06:26.520 06:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.520 06:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68581 00:06:26.520 killing process with pid 68581 00:06:26.520 06:24:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.520 06:24:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.520 06:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68581' 00:06:26.520 06:24:18 -- common/autotest_common.sh@945 -- # kill 68581 00:06:26.520 06:24:18 -- common/autotest_common.sh@950 -- # wait 68581 00:06:26.520 spdk_app_start is called in Round 0. 00:06:26.520 Shutdown signal received, stop current app iteration 00:06:26.520 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 reinitialization... 00:06:26.520 spdk_app_start is called in Round 1. 00:06:26.520 Shutdown signal received, stop current app iteration 00:06:26.520 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 reinitialization... 00:06:26.520 spdk_app_start is called in Round 2. 00:06:26.520 Shutdown signal received, stop current app iteration 00:06:26.520 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 reinitialization... 00:06:26.520 spdk_app_start is called in Round 3. 
00:06:26.520 Shutdown signal received, stop current app iteration 00:06:26.520 06:24:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.520 06:24:19 -- event/event.sh@42 -- # return 0 00:06:26.520 00:06:26.520 real 0m19.084s 00:06:26.520 user 0m42.986s 00:06:26.520 sys 0m2.920s 00:06:26.520 06:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.520 ************************************ 00:06:26.520 END TEST app_repeat 00:06:26.520 ************************************ 00:06:26.520 06:24:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.520 06:24:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.520 06:24:19 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.520 06:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.520 06:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.520 06:24:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.520 ************************************ 00:06:26.520 START TEST cpu_locks 00:06:26.520 ************************************ 00:06:26.520 06:24:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.520 * Looking for test storage... 00:06:26.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.520 06:24:19 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.520 06:24:19 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.520 06:24:19 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.520 06:24:19 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.520 06:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.520 06:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.520 06:24:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.520 ************************************ 00:06:26.520 START TEST default_locks 00:06:26.520 ************************************ 00:06:26.520 06:24:19 -- common/autotest_common.sh@1104 -- # default_locks 00:06:26.520 06:24:19 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69207 00:06:26.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.520 06:24:19 -- event/cpu_locks.sh@47 -- # waitforlisten 69207 00:06:26.520 06:24:19 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.520 06:24:19 -- common/autotest_common.sh@819 -- # '[' -z 69207 ']' 00:06:26.520 06:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.520 06:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.520 06:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.520 06:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.520 06:24:19 -- common/autotest_common.sh@10 -- # set +x 00:06:26.779 [2024-10-04 06:24:19.227657] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
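Every spdk_tgt launch in these lock tests goes through the same waitforlisten handshake: validate the pid, default the RPC endpoint to /var/tmp/spdk.sock, cap the attempts at max_retries=100, print the waiting banner, then poll with xtrace disabled. Only that setup is visible above; the probe below, retrying an RPC call until the socket answers, is an assumption about the quiet section, not a transcript of it:

#!/usr/bin/env bash
# Sketch of the waitforlisten pattern; the rpc_get_methods probe is assumed.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    [[ -n $pid ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1 # target died while we waited
        if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0 # socket is up and answering
        fi
        sleep 0.1 # assumed poll interval
    done
    return 1
}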
00:06:26.779 [2024-10-04 06:24:19.228359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69207 ] 00:06:26.779 [2024-10-04 06:24:19.364917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.779 [2024-10-04 06:24:19.421246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.779 [2024-10-04 06:24:19.421409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.822 06:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.822 06:24:20 -- common/autotest_common.sh@852 -- # return 0 00:06:27.822 06:24:20 -- event/cpu_locks.sh@49 -- # locks_exist 69207 00:06:27.822 06:24:20 -- event/cpu_locks.sh@22 -- # lslocks -p 69207 00:06:27.822 06:24:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.080 06:24:20 -- event/cpu_locks.sh@50 -- # killprocess 69207 00:06:28.080 06:24:20 -- common/autotest_common.sh@926 -- # '[' -z 69207 ']' 00:06:28.080 06:24:20 -- common/autotest_common.sh@930 -- # kill -0 69207 00:06:28.080 06:24:20 -- common/autotest_common.sh@931 -- # uname 00:06:28.080 06:24:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.080 06:24:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69207 00:06:28.080 killing process with pid 69207 00:06:28.080 06:24:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.080 06:24:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.080 06:24:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69207' 00:06:28.080 06:24:20 -- common/autotest_common.sh@945 -- # kill 69207 00:06:28.080 06:24:20 -- common/autotest_common.sh@950 -- # wait 69207 00:06:28.338 06:24:20 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69207 00:06:28.338 06:24:20 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.338 06:24:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69207 00:06:28.338 06:24:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:28.338 06:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.338 06:24:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.338 ERROR: process (pid: 69207) is no longer running 00:06:28.338 06:24:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.338 06:24:20 -- common/autotest_common.sh@643 -- # waitforlisten 69207 00:06:28.338 06:24:20 -- common/autotest_common.sh@819 -- # '[' -z 69207 ']' 00:06:28.338 06:24:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.338 06:24:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.338 06:24:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
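locks_exist, run right after the target comes up, is the suite's positive assertion: the scheduler process must actually hold its per-core file lock. The trace shows it is a two-command pipeline:

#!/usr/bin/env bash
# locks_exist as shown at event/cpu_locks.sh@22: list the locks held by
# the pid and look for the spdk_cpu_lock file name among them.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

The complementary assertion, that no lock is left behind, is what no_locks checks further down.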
00:06:28.338 06:24:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.338 06:24:20 -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69207) - No such process 00:06:28.338 06:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.338 06:24:20 -- common/autotest_common.sh@852 -- # return 1 00:06:28.338 06:24:20 -- common/autotest_common.sh@643 -- # es=1 00:06:28.338 06:24:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:28.338 06:24:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:28.338 06:24:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:28.338 06:24:20 -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.338 ************************************ 00:06:28.338 END TEST default_locks 00:06:28.338 ************************************ 00:06:28.338 06:24:20 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.338 06:24:20 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.338 06:24:20 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.338 00:06:28.338 real 0m1.764s 00:06:28.338 user 0m1.906s 00:06:28.338 sys 0m0.514s 00:06:28.338 06:24:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.338 06:24:20 -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 06:24:20 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:28.338 06:24:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.338 06:24:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.338 06:24:20 -- common/autotest_common.sh@10 -- # set +x 00:06:28.338 ************************************ 00:06:28.338 START TEST default_locks_via_rpc 00:06:28.338 ************************************ 00:06:28.338 06:24:20 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:28.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.338 06:24:20 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69271 00:06:28.338 06:24:20 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.338 06:24:20 -- event/cpu_locks.sh@63 -- # waitforlisten 69271 00:06:28.338 06:24:20 -- common/autotest_common.sh@819 -- # '[' -z 69271 ']' 00:06:28.338 06:24:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.338 06:24:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.338 06:24:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.338 06:24:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.338 06:24:20 -- common/autotest_common.sh@10 -- # set +x 00:06:28.597 [2024-10-04 06:24:21.039183] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
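no_locks is that complementary check: after a target exits (or drops its locks over RPC), no spdk_cpu_lock file may still be claimed. The trace only shows an empty lock_files array being tested with (( 0 != 0 )), so the glob that would populate it, and the /var/tmp location, are assumptions here:

#!/usr/bin/env bash
# Sketch of no_locks (event/cpu_locks.sh@26-27); glob and directory assumed.
no_locks() {
    local lock_files
    shopt -s nullglob # make a miss expand to an empty array
    lock_files=(/var/tmp/spdk_cpu_lock*)
    shopt -u nullglob
    (( ${#lock_files[@]} == 0 )) # succeed only if nothing is held
}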
00:06:28.597 [2024-10-04 06:24:21.039424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69271 ] 00:06:28.597 [2024-10-04 06:24:21.168620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.597 [2024-10-04 06:24:21.255584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.597 [2024-10-04 06:24:21.256090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.537 06:24:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.537 06:24:21 -- common/autotest_common.sh@852 -- # return 0 00:06:29.537 06:24:21 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.537 06:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.537 06:24:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.537 06:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.537 06:24:21 -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.537 06:24:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.537 06:24:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.537 06:24:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.537 06:24:21 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.537 06:24:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:29.537 06:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:29.537 06:24:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:29.537 06:24:22 -- event/cpu_locks.sh@71 -- # locks_exist 69271 00:06:29.537 06:24:22 -- event/cpu_locks.sh@22 -- # lslocks -p 69271 00:06:29.537 06:24:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.795 06:24:22 -- event/cpu_locks.sh@73 -- # killprocess 69271 00:06:29.795 06:24:22 -- common/autotest_common.sh@926 -- # '[' -z 69271 ']' 00:06:29.795 06:24:22 -- common/autotest_common.sh@930 -- # kill -0 69271 00:06:29.795 06:24:22 -- common/autotest_common.sh@931 -- # uname 00:06:29.795 06:24:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.795 06:24:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69271 00:06:29.795 killing process with pid 69271 00:06:29.795 06:24:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.795 06:24:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.795 06:24:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69271' 00:06:29.795 06:24:22 -- common/autotest_common.sh@945 -- # kill 69271 00:06:29.795 06:24:22 -- common/autotest_common.sh@950 -- # wait 69271 00:06:30.363 00:06:30.363 real 0m1.903s 00:06:30.363 user 0m2.024s 00:06:30.363 sys 0m0.551s 00:06:30.363 06:24:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.363 ************************************ 00:06:30.363 END TEST default_locks_via_rpc 00:06:30.363 ************************************ 00:06:30.363 06:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.363 06:24:22 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.363 06:24:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.363 06:24:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.363 06:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.363 
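The default_locks_via_rpc run above exercises the runtime toggle rather than a startup flag: framework_disable_cpumask_locks releases the per-core locks of a live target and framework_enable_cpumask_locks re-acquires them. Both RPC method names are verbatim in the trace; condensed, the sequence the test drives against pid 69271 is (rpc.py path shortened, helpers as sketched earlier):

#!/usr/bin/env bash
rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
no_locks                    # no spdk_cpu_lock file may be claimed now
rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
locks_exist "$spdk_tgt_pid" # the core-0 lock must be held again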
************************************ 00:06:30.363 START TEST non_locking_app_on_locked_coremask 00:06:30.363 ************************************ 00:06:30.363 06:24:22 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:30.363 06:24:22 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69340 00:06:30.363 06:24:22 -- event/cpu_locks.sh@81 -- # waitforlisten 69340 /var/tmp/spdk.sock 00:06:30.363 06:24:22 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.363 06:24:22 -- common/autotest_common.sh@819 -- # '[' -z 69340 ']' 00:06:30.363 06:24:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.363 06:24:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.363 06:24:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.363 06:24:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.363 06:24:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.363 [2024-10-04 06:24:23.006315] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:30.363 [2024-10-04 06:24:23.007398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69340 ] 00:06:30.622 [2024-10-04 06:24:23.152970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.622 [2024-10-04 06:24:23.210374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.622 [2024-10-04 06:24:23.210572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.559 06:24:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.559 06:24:23 -- common/autotest_common.sh@852 -- # return 0 00:06:31.559 06:24:23 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69368 00:06:31.559 06:24:23 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.559 06:24:23 -- event/cpu_locks.sh@85 -- # waitforlisten 69368 /var/tmp/spdk2.sock 00:06:31.559 06:24:23 -- common/autotest_common.sh@819 -- # '[' -z 69368 ']' 00:06:31.559 06:24:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.559 06:24:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.559 06:24:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.559 06:24:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.559 06:24:23 -- common/autotest_common.sh@10 -- # set +x 00:06:31.559 [2024-10-04 06:24:24.044362] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:31.559 [2024-10-04 06:24:24.044646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69368 ] 00:06:31.559 [2024-10-04 06:24:24.184082] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
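The non_locking_app_on_locked_coremask scenario needs two targets alive at once: the first claims core 0 and its lock; the second is started on the same mask but with --disable-cpumask-locks and its own RPC socket, so it boots without contending (the 'CPU core locks deactivated' notice above confirms it skipped acquisition). Condensed from the trace, with binary and script paths shortened:

#!/usr/bin/env bash
spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock # holds the core-0 lock

spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock # same mask, no lock taken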
00:06:31.559 [2024-10-04 06:24:24.184147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.817 [2024-10-04 06:24:24.325571] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.817 [2024-10-04 06:24:24.325729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.752 06:24:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.752 06:24:25 -- common/autotest_common.sh@852 -- # return 0 00:06:32.752 06:24:25 -- event/cpu_locks.sh@87 -- # locks_exist 69340 00:06:32.752 06:24:25 -- event/cpu_locks.sh@22 -- # lslocks -p 69340 00:06:32.752 06:24:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.320 06:24:25 -- event/cpu_locks.sh@89 -- # killprocess 69340 00:06:33.320 06:24:25 -- common/autotest_common.sh@926 -- # '[' -z 69340 ']' 00:06:33.320 06:24:25 -- common/autotest_common.sh@930 -- # kill -0 69340 00:06:33.320 06:24:25 -- common/autotest_common.sh@931 -- # uname 00:06:33.320 06:24:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.320 06:24:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69340 00:06:33.320 killing process with pid 69340 00:06:33.320 06:24:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.320 06:24:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.320 06:24:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69340' 00:06:33.320 06:24:25 -- common/autotest_common.sh@945 -- # kill 69340 00:06:33.320 06:24:25 -- common/autotest_common.sh@950 -- # wait 69340 00:06:34.256 06:24:26 -- event/cpu_locks.sh@90 -- # killprocess 69368 00:06:34.256 06:24:26 -- common/autotest_common.sh@926 -- # '[' -z 69368 ']' 00:06:34.256 06:24:26 -- common/autotest_common.sh@930 -- # kill -0 69368 00:06:34.256 06:24:26 -- common/autotest_common.sh@931 -- # uname 00:06:34.256 06:24:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.256 06:24:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69368 00:06:34.256 killing process with pid 69368 00:06:34.256 06:24:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.256 06:24:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.256 06:24:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69368' 00:06:34.256 06:24:26 -- common/autotest_common.sh@945 -- # kill 69368 00:06:34.256 06:24:26 -- common/autotest_common.sh@950 -- # wait 69368 00:06:34.823 00:06:34.823 real 0m4.442s 00:06:34.823 user 0m4.803s 00:06:34.823 sys 0m1.270s 00:06:34.823 ************************************ 00:06:34.823 END TEST non_locking_app_on_locked_coremask 00:06:34.823 ************************************ 00:06:34.823 06:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.823 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:34.823 06:24:27 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.823 06:24:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.823 06:24:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.823 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:34.823 ************************************ 00:06:34.824 START TEST locking_app_on_unlocked_coremask 00:06:34.824 ************************************ 00:06:34.824 06:24:27 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:34.824 06:24:27 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69447 00:06:34.824 06:24:27 -- event/cpu_locks.sh@99 -- # waitforlisten 69447 /var/tmp/spdk.sock 00:06:34.824 06:24:27 -- common/autotest_common.sh@819 -- # '[' -z 69447 ']' 00:06:34.824 06:24:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.824 06:24:27 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.824 06:24:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.824 06:24:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.824 06:24:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.824 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:34.824 [2024-10-04 06:24:27.493553] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:34.824 [2024-10-04 06:24:27.493669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69447 ] 00:06:35.082 [2024-10-04 06:24:27.624226] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.082 [2024-10-04 06:24:27.624270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.082 [2024-10-04 06:24:27.686945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.082 [2024-10-04 06:24:27.687129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.018 06:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.018 06:24:28 -- common/autotest_common.sh@852 -- # return 0 00:06:36.018 06:24:28 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69475 00:06:36.018 06:24:28 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.018 06:24:28 -- event/cpu_locks.sh@103 -- # waitforlisten 69475 /var/tmp/spdk2.sock 00:06:36.018 06:24:28 -- common/autotest_common.sh@819 -- # '[' -z 69475 ']' 00:06:36.018 06:24:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.018 06:24:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.018 06:24:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.018 06:24:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.018 06:24:28 -- common/autotest_common.sh@10 -- # set +x 00:06:36.018 [2024-10-04 06:24:28.493178] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:36.018 [2024-10-04 06:24:28.493284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69475 ] 00:06:36.018 [2024-10-04 06:24:28.630973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.277 [2024-10-04 06:24:28.767262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.277 [2024-10-04 06:24:28.767465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.844 06:24:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.844 06:24:29 -- common/autotest_common.sh@852 -- # return 0 00:06:36.844 06:24:29 -- event/cpu_locks.sh@105 -- # locks_exist 69475 00:06:36.844 06:24:29 -- event/cpu_locks.sh@22 -- # lslocks -p 69475 00:06:36.844 06:24:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.779 06:24:30 -- event/cpu_locks.sh@107 -- # killprocess 69447 00:06:37.779 06:24:30 -- common/autotest_common.sh@926 -- # '[' -z 69447 ']' 00:06:37.779 06:24:30 -- common/autotest_common.sh@930 -- # kill -0 69447 00:06:37.779 06:24:30 -- common/autotest_common.sh@931 -- # uname 00:06:37.779 06:24:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.779 06:24:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69447 00:06:37.779 06:24:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.779 killing process with pid 69447 00:06:37.779 06:24:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.779 06:24:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69447' 00:06:37.779 06:24:30 -- common/autotest_common.sh@945 -- # kill 69447 00:06:37.779 06:24:30 -- common/autotest_common.sh@950 -- # wait 69447 00:06:38.712 06:24:31 -- event/cpu_locks.sh@108 -- # killprocess 69475 00:06:38.712 06:24:31 -- common/autotest_common.sh@926 -- # '[' -z 69475 ']' 00:06:38.712 06:24:31 -- common/autotest_common.sh@930 -- # kill -0 69475 00:06:38.712 06:24:31 -- common/autotest_common.sh@931 -- # uname 00:06:38.712 06:24:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.712 06:24:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69475 00:06:38.712 06:24:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.712 killing process with pid 69475 00:06:38.712 06:24:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.712 06:24:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69475' 00:06:38.713 06:24:31 -- common/autotest_common.sh@945 -- # kill 69475 00:06:38.713 06:24:31 -- common/autotest_common.sh@950 -- # wait 69475 00:06:39.279 00:06:39.279 real 0m4.345s 00:06:39.279 user 0m4.611s 00:06:39.279 sys 0m1.271s 00:06:39.279 06:24:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.279 06:24:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.279 ************************************ 00:06:39.279 END TEST locking_app_on_unlocked_coremask 00:06:39.279 ************************************ 00:06:39.279 06:24:31 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.279 06:24:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.279 06:24:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.279 06:24:31 -- common/autotest_common.sh@10 -- # set +x 
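Every test above tears its targets down through the same killprocess helper, and the trace shows its full decision ladder for pids 69447 and 69475. Reconstructed (the uname/Linux branch, the ps comm= lookup and the refusal to signal a process named sudo are verbatim; the exact failure handling is assumed):

#!/usr/bin/env bash
# Sketch of killprocess (common/autotest_common.sh@926-950).
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1 # must still be running
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Safety valve seen in the trace: never SIGTERM a sudo process.
        [[ $process_name != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" # reap it so the pid cannot be reused mid-test
}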
00:06:39.279 ************************************ 00:06:39.279 START TEST locking_app_on_locked_coremask 00:06:39.279 ************************************ 00:06:39.279 06:24:31 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:39.279 06:24:31 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69559 00:06:39.279 06:24:31 -- event/cpu_locks.sh@116 -- # waitforlisten 69559 /var/tmp/spdk.sock 00:06:39.279 06:24:31 -- common/autotest_common.sh@819 -- # '[' -z 69559 ']' 00:06:39.279 06:24:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.279 06:24:31 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.279 06:24:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.279 06:24:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.279 06:24:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.279 06:24:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.279 [2024-10-04 06:24:31.888326] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:39.279 [2024-10-04 06:24:31.888421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69559 ] 00:06:39.537 [2024-10-04 06:24:32.017255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.537 [2024-10-04 06:24:32.077930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.537 [2024-10-04 06:24:32.078106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.471 06:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.471 06:24:32 -- common/autotest_common.sh@852 -- # return 0 00:06:40.471 06:24:32 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69587 00:06:40.471 06:24:32 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.471 06:24:32 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69587 /var/tmp/spdk2.sock 00:06:40.471 06:24:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.471 06:24:32 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69587 /var/tmp/spdk2.sock 00:06:40.471 06:24:32 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:40.471 06:24:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.471 06:24:32 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:40.471 06:24:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.471 06:24:32 -- common/autotest_common.sh@643 -- # waitforlisten 69587 /var/tmp/spdk2.sock 00:06:40.471 06:24:32 -- common/autotest_common.sh@819 -- # '[' -z 69587 ']' 00:06:40.471 06:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.471 06:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.471 06:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
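This test launches a second spdk_tgt against a core the first one already holds; a sketch of the pattern being traced here, reduced to the two flags that matter (-m picks the core mask, -r gives the second instance its own RPC socket so the two don't collide):

  # first target claims core 0 (mask 0x1) on the default /var/tmp/spdk.sock
  spdk_tgt -m 0x1 &
  # second instance asks for the same core on its own RPC socket and is
  # expected to exit: core 0 is already locked, as the error below shows
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &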
00:06:40.471 06:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.471 06:24:32 -- common/autotest_common.sh@10 -- # set +x 00:06:40.471 [2024-10-04 06:24:32.930912] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:40.471 [2024-10-04 06:24:32.931023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:06:40.471 [2024-10-04 06:24:33.066065] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69559 has claimed it. 00:06:40.471 [2024-10-04 06:24:33.066136] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.037 ERROR: process (pid: 69587) is no longer running 00:06:41.037 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69587) - No such process 00:06:41.037 06:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.037 06:24:33 -- common/autotest_common.sh@852 -- # return 1 00:06:41.037 06:24:33 -- common/autotest_common.sh@643 -- # es=1 00:06:41.037 06:24:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.037 06:24:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.037 06:24:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.037 06:24:33 -- event/cpu_locks.sh@122 -- # locks_exist 69559 00:06:41.037 06:24:33 -- event/cpu_locks.sh@22 -- # lslocks -p 69559 00:06:41.037 06:24:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.625 06:24:34 -- event/cpu_locks.sh@124 -- # killprocess 69559 00:06:41.625 06:24:34 -- common/autotest_common.sh@926 -- # '[' -z 69559 ']' 00:06:41.625 06:24:34 -- common/autotest_common.sh@930 -- # kill -0 69559 00:06:41.625 06:24:34 -- common/autotest_common.sh@931 -- # uname 00:06:41.625 06:24:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.625 06:24:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69559 00:06:41.625 06:24:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.625 killing process with pid 69559 00:06:41.625 06:24:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.625 06:24:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69559' 00:06:41.625 06:24:34 -- common/autotest_common.sh@945 -- # kill 69559 00:06:41.625 06:24:34 -- common/autotest_common.sh@950 -- # wait 69559 00:06:42.191 00:06:42.191 real 0m2.783s 00:06:42.191 user 0m3.177s 00:06:42.191 sys 0m0.690s 00:06:42.191 06:24:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.191 06:24:34 -- common/autotest_common.sh@10 -- # set +x 00:06:42.191 ************************************ 00:06:42.191 END TEST locking_app_on_locked_coremask 00:06:42.191 ************************************ 00:06:42.191 06:24:34 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.191 06:24:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.191 06:24:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.191 06:24:34 -- common/autotest_common.sh@10 -- # set +x 00:06:42.191 ************************************ 00:06:42.191 START TEST locking_overlapped_coremask 00:06:42.191 ************************************ 00:06:42.191 06:24:34 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:42.191 06:24:34 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69639 00:06:42.191 06:24:34 -- event/cpu_locks.sh@133 -- # waitforlisten 69639 /var/tmp/spdk.sock 00:06:42.191 06:24:34 -- common/autotest_common.sh@819 -- # '[' -z 69639 ']' 00:06:42.191 06:24:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.191 06:24:34 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.191 06:24:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.191 06:24:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.191 06:24:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.191 06:24:34 -- common/autotest_common.sh@10 -- # set +x 00:06:42.191 [2024-10-04 06:24:34.744128] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:42.191 [2024-10-04 06:24:34.744251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69639 ] 00:06:42.450 [2024-10-04 06:24:34.878419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.450 [2024-10-04 06:24:34.945268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.450 [2024-10-04 06:24:34.945633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.450 [2024-10-04 06:24:34.945757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.450 [2024-10-04 06:24:34.945768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.385 06:24:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.385 06:24:35 -- common/autotest_common.sh@852 -- # return 0 00:06:43.385 06:24:35 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.385 06:24:35 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69669 00:06:43.385 06:24:35 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69669 /var/tmp/spdk2.sock 00:06:43.385 06:24:35 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.385 06:24:35 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69669 /var/tmp/spdk2.sock 00:06:43.385 06:24:35 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:43.385 06:24:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.385 06:24:35 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:43.385 06:24:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.385 06:24:35 -- common/autotest_common.sh@643 -- # waitforlisten 69669 /var/tmp/spdk2.sock 00:06:43.385 06:24:35 -- common/autotest_common.sh@819 -- # '[' -z 69669 ']' 00:06:43.385 06:24:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.385 06:24:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.385 06:24:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:43.385 06:24:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.385 06:24:35 -- common/autotest_common.sh@10 -- # set +x 00:06:43.385 [2024-10-04 06:24:35.795965] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:43.385 [2024-10-04 06:24:35.796554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69669 ] 00:06:43.385 [2024-10-04 06:24:35.930887] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69639 has claimed it. 00:06:43.385 [2024-10-04 06:24:35.930966] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.962 ERROR: process (pid: 69669) is no longer running 00:06:43.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69669) - No such process 00:06:43.962 06:24:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.962 06:24:36 -- common/autotest_common.sh@852 -- # return 1 00:06:43.962 06:24:36 -- common/autotest_common.sh@643 -- # es=1 00:06:43.962 06:24:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:43.962 06:24:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:43.962 06:24:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:43.962 06:24:36 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.962 06:24:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.962 06:24:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.962 06:24:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.962 06:24:36 -- event/cpu_locks.sh@141 -- # killprocess 69639 00:06:43.962 06:24:36 -- common/autotest_common.sh@926 -- # '[' -z 69639 ']' 00:06:43.962 06:24:36 -- common/autotest_common.sh@930 -- # kill -0 69639 00:06:43.962 06:24:36 -- common/autotest_common.sh@931 -- # uname 00:06:43.962 06:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.962 06:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69639 00:06:43.962 06:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:43.962 06:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:43.962 06:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69639' 00:06:43.962 killing process with pid 69639 00:06:43.962 06:24:36 -- common/autotest_common.sh@945 -- # kill 69639 00:06:43.962 06:24:36 -- common/autotest_common.sh@950 -- # wait 69639 00:06:44.537 00:06:44.537 real 0m2.451s 00:06:44.537 user 0m6.900s 00:06:44.537 sys 0m0.514s 00:06:44.537 06:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.537 ************************************ 00:06:44.537 END TEST locking_overlapped_coremask 00:06:44.537 ************************************ 00:06:44.537 06:24:37 -- common/autotest_common.sh@10 -- # set +x 00:06:44.537 06:24:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.537 06:24:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.537 06:24:37 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.537 06:24:37 -- common/autotest_common.sh@10 -- # set +x 00:06:44.537 ************************************ 00:06:44.537 START TEST locking_overlapped_coremask_via_rpc 00:06:44.537 ************************************ 00:06:44.537 06:24:37 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:44.537 06:24:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69726 00:06:44.537 06:24:37 -- event/cpu_locks.sh@149 -- # waitforlisten 69726 /var/tmp/spdk.sock 00:06:44.537 06:24:37 -- common/autotest_common.sh@819 -- # '[' -z 69726 ']' 00:06:44.537 06:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.537 06:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.537 06:24:37 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.537 06:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.537 06:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.537 06:24:37 -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 [2024-10-04 06:24:37.237630] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:44.795 [2024-10-04 06:24:37.237716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69726 ] 00:06:44.795 [2024-10-04 06:24:37.366181] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.795 [2024-10-04 06:24:37.366211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.795 [2024-10-04 06:24:37.430171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.795 [2024-10-04 06:24:37.430504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.795 [2024-10-04 06:24:37.430625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.795 [2024-10-04 06:24:37.430633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.729 06:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.729 06:24:38 -- common/autotest_common.sh@852 -- # return 0 00:06:45.729 06:24:38 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.729 06:24:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69756 00:06:45.729 06:24:38 -- event/cpu_locks.sh@153 -- # waitforlisten 69756 /var/tmp/spdk2.sock 00:06:45.729 06:24:38 -- common/autotest_common.sh@819 -- # '[' -z 69756 ']' 00:06:45.729 06:24:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.729 06:24:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.729 06:24:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:45.729 06:24:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.729 06:24:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.729 [2024-10-04 06:24:38.255880] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:45.729 [2024-10-04 06:24:38.255967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69756 ] 00:06:45.729 [2024-10-04 06:24:38.388579] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.729 [2024-10-04 06:24:38.388625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.988 [2024-10-04 06:24:38.540656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.988 [2024-10-04 06:24:38.541731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.988 [2024-10-04 06:24:38.544010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.988 [2024-10-04 06:24:38.544011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.924 06:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.924 06:24:39 -- common/autotest_common.sh@852 -- # return 0 00:06:46.924 06:24:39 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.924 06:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.924 06:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:46.924 06:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.924 06:24:39 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.924 06:24:39 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.924 06:24:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.924 06:24:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:46.924 06:24:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.924 06:24:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:46.924 06:24:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.924 06:24:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.924 06:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.924 06:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:46.924 [2024-10-04 06:24:39.292023] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69726 has claimed it. 
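The claim failure above is the arithmetic of the two masks: the first target (pid 69726) was started with -m 0x7 (binary 00111, cores 0-2) and has just taken its locks via RPC, while the second was given -m 0x1c (binary 11100, cores 2-4). The masks intersect in exactly one bit, 0x7 & 0x1c = 0x4, i.e. core 2, which is the core the error names.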
00:06:46.924 2024/10/04 06:24:39 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:46.924 request: 00:06:46.924 { 00:06:46.924 "method": "framework_enable_cpumask_locks", 00:06:46.924 "params": {} 00:06:46.924 } 00:06:46.924 Got JSON-RPC error response 00:06:46.924 GoRPCClient: error on JSON-RPC call 00:06:46.924 06:24:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:46.924 06:24:39 -- common/autotest_common.sh@643 -- # es=1 00:06:46.924 06:24:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.924 06:24:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.924 06:24:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.924 06:24:39 -- event/cpu_locks.sh@158 -- # waitforlisten 69726 /var/tmp/spdk.sock 00:06:46.924 06:24:39 -- common/autotest_common.sh@819 -- # '[' -z 69726 ']' 00:06:46.924 06:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.924 06:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.924 06:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.924 06:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.924 06:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:46.924 06:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.924 06:24:39 -- common/autotest_common.sh@852 -- # return 0 00:06:46.924 06:24:39 -- event/cpu_locks.sh@159 -- # waitforlisten 69756 /var/tmp/spdk2.sock 00:06:46.924 06:24:39 -- common/autotest_common.sh@819 -- # '[' -z 69756 ']' 00:06:46.924 06:24:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.924 06:24:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.924 06:24:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
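rpc_cmd in this trace is effectively SPDK's scripts/rpc.py pointed at the target's UNIX socket, so the failing call can be reproduced by hand; a minimal sketch against the second target's socket as used here:

  # ask the target on spdk2.sock to take its per-core lock files; while
  # another process owns a shared core this returns the -32603 error above
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks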
00:06:46.924 06:24:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.924 06:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.183 06:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.183 06:24:39 -- common/autotest_common.sh@852 -- # return 0 00:06:47.183 06:24:39 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.183 06:24:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.183 06:24:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.183 06:24:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.183 00:06:47.183 real 0m2.626s 00:06:47.183 user 0m1.348s 00:06:47.183 sys 0m0.212s 00:06:47.183 06:24:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.183 06:24:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.183 ************************************ 00:06:47.183 END TEST locking_overlapped_coremask_via_rpc 00:06:47.183 ************************************ 00:06:47.183 06:24:39 -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.183 06:24:39 -- event/cpu_locks.sh@15 -- # [[ -z 69726 ]] 00:06:47.183 06:24:39 -- event/cpu_locks.sh@15 -- # killprocess 69726 00:06:47.183 06:24:39 -- common/autotest_common.sh@926 -- # '[' -z 69726 ']' 00:06:47.183 06:24:39 -- common/autotest_common.sh@930 -- # kill -0 69726 00:06:47.183 06:24:39 -- common/autotest_common.sh@931 -- # uname 00:06:47.183 06:24:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.183 06:24:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69726 00:06:47.441 killing process with pid 69726 00:06:47.441 06:24:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:47.441 06:24:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:47.441 06:24:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69726' 00:06:47.441 06:24:39 -- common/autotest_common.sh@945 -- # kill 69726 00:06:47.441 06:24:39 -- common/autotest_common.sh@950 -- # wait 69726 00:06:48.009 06:24:40 -- event/cpu_locks.sh@16 -- # [[ -z 69756 ]] 00:06:48.009 06:24:40 -- event/cpu_locks.sh@16 -- # killprocess 69756 00:06:48.009 06:24:40 -- common/autotest_common.sh@926 -- # '[' -z 69756 ']' 00:06:48.009 06:24:40 -- common/autotest_common.sh@930 -- # kill -0 69756 00:06:48.009 06:24:40 -- common/autotest_common.sh@931 -- # uname 00:06:48.009 06:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:48.009 06:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69756 00:06:48.009 killing process with pid 69756 00:06:48.009 06:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:48.009 06:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:48.009 06:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69756' 00:06:48.009 06:24:40 -- common/autotest_common.sh@945 -- # kill 69756 00:06:48.009 06:24:40 -- common/autotest_common.sh@950 -- # wait 69756 00:06:48.269 06:24:40 -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.269 06:24:40 -- event/cpu_locks.sh@1 -- # cleanup 00:06:48.269 06:24:40 -- event/cpu_locks.sh@15 -- # [[ -z 69726 ]] 00:06:48.269 06:24:40 -- event/cpu_locks.sh@15 -- # killprocess 69726 00:06:48.269 06:24:40 -- 
common/autotest_common.sh@926 -- # '[' -z 69726 ']' 00:06:48.269 06:24:40 -- common/autotest_common.sh@930 -- # kill -0 69726 00:06:48.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69726) - No such process 00:06:48.269 06:24:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69726 is not found' 00:06:48.269 Process with pid 69726 is not found 00:06:48.269 06:24:40 -- event/cpu_locks.sh@16 -- # [[ -z 69756 ]] 00:06:48.269 06:24:40 -- event/cpu_locks.sh@16 -- # killprocess 69756 00:06:48.269 06:24:40 -- common/autotest_common.sh@926 -- # '[' -z 69756 ']' 00:06:48.269 06:24:40 -- common/autotest_common.sh@930 -- # kill -0 69756 00:06:48.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69756) - No such process 00:06:48.269 Process with pid 69756 is not found 00:06:48.269 06:24:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69756 is not found' 00:06:48.269 06:24:40 -- event/cpu_locks.sh@18 -- # rm -f 00:06:48.269 00:06:48.269 real 0m21.725s 00:06:48.269 user 0m37.986s 00:06:48.269 sys 0m5.973s 00:06:48.269 06:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.269 ************************************ 00:06:48.269 END TEST cpu_locks 00:06:48.269 ************************************ 00:06:48.269 06:24:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 00:06:48.269 real 0m49.482s 00:06:48.269 user 1m35.902s 00:06:48.269 sys 0m9.656s 00:06:48.269 06:24:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.269 06:24:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 ************************************ 00:06:48.269 END TEST event 00:06:48.269 ************************************ 00:06:48.269 06:24:40 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.269 06:24:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.269 06:24:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.269 06:24:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 ************************************ 00:06:48.269 START TEST thread 00:06:48.269 ************************************ 00:06:48.269 06:24:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:48.529 * Looking for test storage... 00:06:48.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:48.529 06:24:40 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.529 06:24:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:48.529 06:24:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.529 06:24:40 -- common/autotest_common.sh@10 -- # set +x 00:06:48.529 ************************************ 00:06:48.529 START TEST thread_poller_perf 00:06:48.529 ************************************ 00:06:48.529 06:24:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.529 [2024-10-04 06:24:40.996972] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:48.529 [2024-10-04 06:24:40.997058] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:06:48.529 [2024-10-04 06:24:41.130865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.529 [2024-10-04 06:24:41.207979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.529 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.905 ====================================== 00:06:49.905 busy:2208202950 (cyc) 00:06:49.905 total_run_count: 352000 00:06:49.905 tsc_hz: 2200000000 (cyc) 00:06:49.905 ====================================== 00:06:49.905 poller_cost: 6273 (cyc), 2851 (nsec) 00:06:49.905 00:06:49.905 real 0m1.342s 00:06:49.905 user 0m1.161s 00:06:49.905 sys 0m0.067s 00:06:49.905 06:24:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.905 ************************************ 00:06:49.905 END TEST thread_poller_perf 00:06:49.905 ************************************ 00:06:49.905 06:24:42 -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 06:24:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.905 06:24:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:49.905 06:24:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.905 06:24:42 -- common/autotest_common.sh@10 -- # set +x 00:06:49.905 ************************************ 00:06:49.905 START TEST thread_poller_perf 00:06:49.905 ************************************ 00:06:49.905 06:24:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.905 [2024-10-04 06:24:42.391767] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:49.905 [2024-10-04 06:24:42.391880] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69937 ] 00:06:49.905 [2024-10-04 06:24:42.526230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.163 [2024-10-04 06:24:42.588571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.163 Running 1000 pollers for 1 seconds with 0 microseconds period. 
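The poller_cost line in these summaries is busy cycles divided by run count, converted to wall time with the reported TSC rate; for the 1-microsecond-period run above:

  poller_cost = 2208202950 cyc / 352000 runs ≈ 6273 cyc
  6273 cyc / 2.2 cyc per ns (tsc_hz 2200000000) ≈ 2851 ns

The 0-microsecond run reported next follows the same arithmetic (431 cyc ≈ 195 ns per poll).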
00:06:51.097 ====================================== 00:06:51.097 busy:2202911872 (cyc) 00:06:51.097 total_run_count: 5103000 00:06:51.097 tsc_hz: 2200000000 (cyc) 00:06:51.097 ====================================== 00:06:51.097 poller_cost: 431 (cyc), 195 (nsec) 00:06:51.097 00:06:51.097 real 0m1.282s 00:06:51.097 user 0m1.120s 00:06:51.097 sys 0m0.055s 00:06:51.097 06:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.097 ************************************ 00:06:51.097 END TEST thread_poller_perf 00:06:51.097 ************************************ 00:06:51.097 06:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.097 06:24:43 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:51.097 00:06:51.097 real 0m2.810s 00:06:51.097 user 0m2.345s 00:06:51.097 sys 0m0.238s 00:06:51.097 06:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.097 06:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.097 ************************************ 00:06:51.097 END TEST thread 00:06:51.097 ************************************ 00:06:51.097 06:24:43 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:51.097 06:24:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:51.097 06:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.097 06:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.097 ************************************ 00:06:51.097 START TEST accel 00:06:51.097 ************************************ 00:06:51.097 06:24:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:51.355 * Looking for test storage... 00:06:51.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:51.355 06:24:43 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:51.355 06:24:43 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:51.355 06:24:43 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.355 06:24:43 -- accel/accel.sh@59 -- # spdk_tgt_pid=70007 00:06:51.355 06:24:43 -- accel/accel.sh@60 -- # waitforlisten 70007 00:06:51.355 06:24:43 -- common/autotest_common.sh@819 -- # '[' -z 70007 ']' 00:06:51.355 06:24:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.355 06:24:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.355 06:24:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.355 06:24:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.355 06:24:43 -- accel/accel.sh@58 -- # build_accel_config 00:06:51.355 06:24:43 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:51.355 06:24:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.355 06:24:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.355 06:24:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.355 06:24:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.355 06:24:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.355 06:24:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.355 06:24:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.355 06:24:43 -- accel/accel.sh@42 -- # jq -r . 00:06:51.355 [2024-10-04 06:24:43.906288] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:51.355 [2024-10-04 06:24:43.906906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70007 ] 00:06:51.619 [2024-10-04 06:24:44.042556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.619 [2024-10-04 06:24:44.110092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.619 [2024-10-04 06:24:44.110261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.553 06:24:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.553 06:24:44 -- common/autotest_common.sh@852 -- # return 0 00:06:52.553 06:24:44 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:52.553 06:24:44 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:52.553 06:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.553 06:24:44 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:52.553 06:24:44 -- common/autotest_common.sh@10 -- # set +x 00:06:52.553 06:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # IFS== 00:06:52.553 06:24:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.553 06:24:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.553 06:24:44 -- accel/accel.sh@67 -- # killprocess 70007 00:06:52.553 06:24:44 -- common/autotest_common.sh@926 -- # '[' -z 70007 ']' 00:06:52.553 06:24:44 -- common/autotest_common.sh@930 -- # kill -0 70007 00:06:52.553 06:24:44 -- common/autotest_common.sh@931 -- # uname 00:06:52.553 06:24:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.553 06:24:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70007 00:06:52.553 06:24:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.553 06:24:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.553 killing process with pid 70007 00:06:52.553 06:24:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70007' 00:06:52.553 06:24:44 -- common/autotest_common.sh@945 -- # kill 70007 00:06:52.553 06:24:44 -- common/autotest_common.sh@950 -- # wait 70007 00:06:52.811 06:24:45 -- accel/accel.sh@68 -- # trap - ERR 00:06:52.811 06:24:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:52.811 06:24:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:52.812 06:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.812 06:24:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.070 06:24:45 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:53.070 06:24:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:53.070 06:24:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.070 06:24:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.070 06:24:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:53.070 06:24:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.070 06:24:45 -- accel/accel.sh@42 -- # jq -r . 00:06:53.070 06:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.070 06:24:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.070 06:24:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:53.070 06:24:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.070 06:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.070 06:24:45 -- common/autotest_common.sh@10 -- # set +x 00:06:53.070 ************************************ 00:06:53.070 START TEST accel_missing_filename 00:06:53.070 ************************************ 00:06:53.070 06:24:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:53.070 06:24:45 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.070 06:24:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:53.070 06:24:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:53.070 06:24:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.070 06:24:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:53.070 06:24:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.070 06:24:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:53.070 06:24:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:53.070 06:24:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.070 06:24:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.070 06:24:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.070 06:24:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.070 06:24:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.070 06:24:45 -- accel/accel.sh@42 -- # jq -r . 00:06:53.070 [2024-10-04 06:24:45.592566] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:53.070 [2024-10-04 06:24:45.592667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70082 ] 00:06:53.070 [2024-10-04 06:24:45.727038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.328 [2024-10-04 06:24:45.806090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.328 [2024-10-04 06:24:45.879361] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.328 [2024-10-04 06:24:45.984214] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:53.586 A filename is required. 
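accel_perf has no default input for compress workloads, so the run above is expected to abort; the accepted form names an uncompressed input file with -l, as the next test does. A minimal invocation along those lines (same corpus the suite uses; note that -y, verify, is rejected for compress, which is exactly what accel_compress_verify below demonstrates):

  accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib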
00:06:53.586 06:24:46 -- common/autotest_common.sh@643 -- # es=234 00:06:53.586 06:24:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.586 06:24:46 -- common/autotest_common.sh@652 -- # es=106 00:06:53.586 06:24:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:53.586 06:24:46 -- common/autotest_common.sh@660 -- # es=1 00:06:53.586 06:24:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.586 00:06:53.586 real 0m0.522s 00:06:53.586 user 0m0.327s 00:06:53.586 sys 0m0.142s 00:06:53.586 06:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.586 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 END TEST accel_missing_filename 00:06:53.586 ************************************ 00:06:53.586 06:24:46 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.586 06:24:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:53.586 06:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.586 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:53.586 ************************************ 00:06:53.586 START TEST accel_compress_verify 00:06:53.586 ************************************ 00:06:53.586 06:24:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.586 06:24:46 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.586 06:24:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.586 06:24:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:53.586 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.586 06:24:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:53.586 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.586 06:24:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.586 06:24:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:53.586 06:24:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.586 06:24:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.586 06:24:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.586 06:24:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.586 06:24:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.586 06:24:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.586 06:24:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.586 06:24:46 -- accel/accel.sh@42 -- # jq -r . 00:06:53.586 [2024-10-04 06:24:46.169448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:06:53.586 [2024-10-04 06:24:46.169596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70101 ] 00:06:53.844 [2024-10-04 06:24:46.298029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.844 [2024-10-04 06:24:46.366783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.844 [2024-10-04 06:24:46.438643] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.117 [2024-10-04 06:24:46.540178] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:54.117 00:06:54.117 Compression does not support the verify option, aborting. 00:06:54.117 06:24:46 -- common/autotest_common.sh@643 -- # es=161 00:06:54.117 06:24:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.117 06:24:46 -- common/autotest_common.sh@652 -- # es=33 00:06:54.117 06:24:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:54.117 06:24:46 -- common/autotest_common.sh@660 -- # es=1 00:06:54.117 06:24:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.117 00:06:54.117 real 0m0.471s 00:06:54.117 user 0m0.282s 00:06:54.117 sys 0m0.132s 00:06:54.117 06:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.117 ************************************ 00:06:54.117 END TEST accel_compress_verify 00:06:54.117 ************************************ 00:06:54.117 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 06:24:46 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:54.117 06:24:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:54.117 06:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.117 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 ************************************ 00:06:54.117 START TEST accel_wrong_workload 00:06:54.117 ************************************ 00:06:54.117 06:24:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:54.117 06:24:46 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.117 06:24:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:54.117 06:24:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.117 06:24:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:54.117 06:24:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:54.117 06:24:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.117 06:24:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.117 06:24:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.117 06:24:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.117 06:24:46 -- accel/accel.sh@42 -- # jq -r . 
00:06:54.117 Unsupported workload type: foobar 00:06:54.117 [2024-10-04 06:24:46.690552] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:54.117 accel_perf options: 00:06:54.117 [-h help message] 00:06:54.117 [-q queue depth per core] 00:06:54.117 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:54.117 [-T number of threads per core 00:06:54.117 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:54.117 [-t time in seconds] 00:06:54.117 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:54.117 [ dif_verify, , dif_generate, dif_generate_copy 00:06:54.117 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:54.117 [-l for compress/decompress workloads, name of uncompressed input file 00:06:54.117 [-S for crc32c workload, use this seed value (default 0) 00:06:54.117 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:54.117 [-f for fill workload, use this BYTE value (default 255) 00:06:54.117 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:54.117 [-y verify result if this switch is on] 00:06:54.117 [-a tasks to allocate per core (default: same value as -q)] 00:06:54.117 Can be used to spread operations across a wider range of memory. 00:06:54.117 06:24:46 -- common/autotest_common.sh@643 -- # es=1 00:06:54.117 06:24:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.117 06:24:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:54.117 06:24:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.117 00:06:54.117 real 0m0.028s 00:06:54.117 user 0m0.012s 00:06:54.117 sys 0m0.017s 00:06:54.117 06:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.117 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 ************************************ 00:06:54.117 END TEST accel_wrong_workload 00:06:54.117 ************************************ 00:06:54.117 06:24:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:54.117 06:24:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:54.117 06:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.117 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 ************************************ 00:06:54.117 START TEST accel_negative_buffers 00:06:54.117 ************************************ 00:06:54.117 06:24:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:54.117 06:24:46 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.117 06:24:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:54.117 06:24:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:54.117 06:24:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.117 06:24:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:54.117 06:24:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:54.117 06:24:46 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:54.117 06:24:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.117 06:24:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.117 06:24:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.117 06:24:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.117 06:24:46 -- accel/accel.sh@42 -- # jq -r . 00:06:54.117 -x option must be non-negative. 00:06:54.117 [2024-10-04 06:24:46.764684] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:54.117 accel_perf options: 00:06:54.117 [-h help message] 00:06:54.117 [-q queue depth per core] 00:06:54.117 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:54.117 [-T number of threads per core 00:06:54.117 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:54.117 [-t time in seconds] 00:06:54.117 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:54.117 [ dif_verify, , dif_generate, dif_generate_copy 00:06:54.117 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:54.117 [-l for compress/decompress workloads, name of uncompressed input file 00:06:54.117 [-S for crc32c workload, use this seed value (default 0) 00:06:54.117 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:54.117 [-f for fill workload, use this BYTE value (default 255) 00:06:54.117 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:54.117 [-y verify result if this switch is on] 00:06:54.117 [-a tasks to allocate per core (default: same value as -q)] 00:06:54.117 Can be used to spread operations across a wider range of memory. 
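Each of these negative cases is wrapped in the suite's NOT helper, which inverts the wrapped command's exit status; the es= bookkeeping around it then distinguishes signal deaths (es > 128) from ordinary failures. A much-simplified stand-in with the same contract (the real helper in test/common/autotest_common.sh also validates the argument via valid_exec_arg and normalizes es):

  # succeed only if the wrapped command fails
  NOT() {
      if "$@"; then
          return 1   # command unexpectedly succeeded
      fi
      return 0       # command failed, as the negative test expects
  }
  NOT accel_perf -t 1 -w xor -y -x -1   # passes: -x must be non-negative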
00:06:54.117 06:24:46 -- common/autotest_common.sh@643 -- # es=1 00:06:54.117 06:24:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.117 06:24:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:54.117 06:24:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.117 00:06:54.117 real 0m0.028s 00:06:54.117 user 0m0.018s 00:06:54.117 sys 0m0.010s 00:06:54.117 06:24:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.117 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.117 ************************************ 00:06:54.117 END TEST accel_negative_buffers 00:06:54.117 ************************************ 00:06:54.376 06:24:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:54.376 06:24:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:54.376 06:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.376 06:24:46 -- common/autotest_common.sh@10 -- # set +x 00:06:54.376 ************************************ 00:06:54.376 START TEST accel_crc32c 00:06:54.376 ************************************ 00:06:54.376 06:24:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:54.376 06:24:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.376 06:24:46 -- accel/accel.sh@17 -- # local accel_module 00:06:54.376 06:24:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:54.376 06:24:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:54.376 06:24:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.376 06:24:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.376 06:24:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.376 06:24:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.376 06:24:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.376 06:24:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.376 06:24:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.376 06:24:46 -- accel/accel.sh@42 -- # jq -r . 00:06:54.376 [2024-10-04 06:24:46.841954] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:54.376 [2024-10-04 06:24:46.842048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70165 ] 00:06:54.376 [2024-10-04 06:24:46.977099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.376 [2024-10-04 06:24:47.045337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.751 06:24:48 -- accel/accel.sh@18 -- # out=' 00:06:55.751 SPDK Configuration: 00:06:55.751 Core mask: 0x1 00:06:55.751 00:06:55.751 Accel Perf Configuration: 00:06:55.751 Workload Type: crc32c 00:06:55.751 CRC-32C seed: 32 00:06:55.751 Transfer size: 4096 bytes 00:06:55.751 Vector count 1 00:06:55.751 Module: software 00:06:55.751 Queue depth: 32 00:06:55.751 Allocate depth: 32 00:06:55.751 # threads/core: 1 00:06:55.751 Run time: 1 seconds 00:06:55.751 Verify: Yes 00:06:55.751 00:06:55.751 Running for 1 seconds... 
00:06:55.751 00:06:55.751 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.751 ------------------------------------------------------------------------------------ 00:06:55.751 0,0 566688/s 2213 MiB/s 0 0 00:06:55.751 ==================================================================================== 00:06:55.751 Total 566688/s 2213 MiB/s 0 0' 00:06:55.751 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:55.751 06:24:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.751 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:55.751 06:24:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.751 06:24:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.751 06:24:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.751 06:24:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.751 06:24:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.751 06:24:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.751 06:24:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.751 06:24:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.751 06:24:48 -- accel/accel.sh@42 -- # jq -r . 00:06:55.751 [2024-10-04 06:24:48.320648] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:55.752 [2024-10-04 06:24:48.320742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70179 ] 00:06:56.010 [2024-10-04 06:24:48.455751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.010 [2024-10-04 06:24:48.520067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val=0x1 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val=crc32c 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val=32 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.010 06:24:48 -- accel/accel.sh@21 -- # val=software 00:06:56.010 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.010 06:24:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.010 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val=32 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val=32 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val=1 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val=Yes 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:56.011 06:24:48 -- accel/accel.sh@21 -- # val= 00:06:56.011 06:24:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # IFS=: 00:06:56.011 06:24:48 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@21 -- # val= 00:06:57.384 06:24:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # IFS=: 00:06:57.384 06:24:49 -- accel/accel.sh@20 -- # read -r var val 00:06:57.384 06:24:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.384 06:24:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:57.384 06:24:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.384 00:06:57.384 real 0m2.987s 00:06:57.384 user 0m2.515s 00:06:57.384 sys 0m0.270s 00:06:57.384 06:24:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.384 06:24:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.384 ************************************ 00:06:57.384 END TEST accel_crc32c 00:06:57.384 ************************************ 00:06:57.384 06:24:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:57.384 06:24:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:57.384 06:24:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.384 06:24:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.384 ************************************ 00:06:57.384 START TEST accel_crc32c_C2 00:06:57.384 ************************************ 00:06:57.384 06:24:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:57.384 06:24:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.384 06:24:49 -- accel/accel.sh@17 -- # local accel_module 00:06:57.384 06:24:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.384 06:24:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.384 06:24:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.384 06:24:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.384 06:24:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.384 06:24:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.384 06:24:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.384 06:24:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.384 06:24:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.384 06:24:49 -- accel/accel.sh@42 -- # jq -r . 00:06:57.384 [2024-10-04 06:24:49.886236] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:57.384 [2024-10-04 06:24:49.886313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:06:57.384 [2024-10-04 06:24:50.016618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.642 [2024-10-04 06:24:50.108067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.062 06:24:51 -- accel/accel.sh@18 -- # out=' 00:06:59.062 SPDK Configuration: 00:06:59.062 Core mask: 0x1 00:06:59.062 00:06:59.062 Accel Perf Configuration: 00:06:59.062 Workload Type: crc32c 00:06:59.062 CRC-32C seed: 0 00:06:59.062 Transfer size: 4096 bytes 00:06:59.062 Vector count 2 00:06:59.062 Module: software 00:06:59.062 Queue depth: 32 00:06:59.062 Allocate depth: 32 00:06:59.062 # threads/core: 1 00:06:59.062 Run time: 1 seconds 00:06:59.062 Verify: Yes 00:06:59.062 00:06:59.062 Running for 1 seconds... 
00:06:59.062 00:06:59.062 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.062 ------------------------------------------------------------------------------------ 00:06:59.062 0,0 437920/s 1710 MiB/s 0 0 00:06:59.062 ==================================================================================== 00:06:59.062 Total 437920/s 1710 MiB/s 0 0' 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:59.062 06:24:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.062 06:24:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.062 06:24:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.062 06:24:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.062 06:24:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.062 06:24:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.062 06:24:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.062 06:24:51 -- accel/accel.sh@42 -- # jq -r . 00:06:59.062 [2024-10-04 06:24:51.383011] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:06:59.062 [2024-10-04 06:24:51.383105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70233 ] 00:06:59.062 [2024-10-04 06:24:51.514268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.062 [2024-10-04 06:24:51.574682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=0x1 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=crc32c 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=0 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=software 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=32 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=32 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=1 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val=Yes 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:06:59.062 06:24:51 -- accel/accel.sh@21 -- # val= 00:06:59.062 06:24:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # IFS=: 00:06:59.062 06:24:51 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@21 -- # val= 00:07:00.435 06:24:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # IFS=: 00:07:00.435 06:24:52 -- accel/accel.sh@20 -- # read -r var val 00:07:00.435 06:24:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.435 06:24:52 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:00.435 06:24:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.435 00:07:00.435 real 0m2.993s 00:07:00.435 user 0m2.513s 00:07:00.435 sys 0m0.275s 00:07:00.435 06:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.435 06:24:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.435 ************************************ 00:07:00.435 END TEST accel_crc32c_C2 00:07:00.435 ************************************ 00:07:00.435 06:24:52 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:00.435 06:24:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:00.435 06:24:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.436 06:24:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.436 ************************************ 00:07:00.436 START TEST accel_copy 00:07:00.436 ************************************ 00:07:00.436 06:24:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:00.436 06:24:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.436 06:24:52 -- accel/accel.sh@17 -- # local accel_module 00:07:00.436 06:24:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:00.436 06:24:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.436 06:24:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.436 06:24:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.436 06:24:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.436 06:24:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.436 06:24:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.436 06:24:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.436 06:24:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.436 06:24:52 -- accel/accel.sh@42 -- # jq -r . 00:07:00.436 [2024-10-04 06:24:52.934143] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:00.436 [2024-10-04 06:24:52.934271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70273 ] 00:07:00.436 [2024-10-04 06:24:53.068324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.694 [2024-10-04 06:24:53.129272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.069 06:24:54 -- accel/accel.sh@18 -- # out=' 00:07:02.069 SPDK Configuration: 00:07:02.069 Core mask: 0x1 00:07:02.069 00:07:02.069 Accel Perf Configuration: 00:07:02.069 Workload Type: copy 00:07:02.069 Transfer size: 4096 bytes 00:07:02.069 Vector count 1 00:07:02.069 Module: software 00:07:02.069 Queue depth: 32 00:07:02.069 Allocate depth: 32 00:07:02.069 # threads/core: 1 00:07:02.069 Run time: 1 seconds 00:07:02.069 Verify: Yes 00:07:02.069 00:07:02.069 Running for 1 seconds... 
00:07:02.069 00:07:02.069 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.069 ------------------------------------------------------------------------------------ 00:07:02.069 0,0 396800/s 1550 MiB/s 0 0 00:07:02.069 ==================================================================================== 00:07:02.069 Total 396800/s 1550 MiB/s 0 0' 00:07:02.069 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.069 06:24:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:02.069 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.069 06:24:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:02.069 06:24:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.069 06:24:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.069 06:24:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.069 06:24:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.069 06:24:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.069 06:24:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.069 06:24:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.069 06:24:54 -- accel/accel.sh@42 -- # jq -r . 00:07:02.070 [2024-10-04 06:24:54.437131] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:02.070 [2024-10-04 06:24:54.437240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70287 ] 00:07:02.070 [2024-10-04 06:24:54.572561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.070 [2024-10-04 06:24:54.630718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=0x1 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=copy 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- 
accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=software 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=32 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=32 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=1 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val=Yes 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:02.070 06:24:54 -- accel/accel.sh@21 -- # val= 00:07:02.070 06:24:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # IFS=: 00:07:02.070 06:24:54 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@21 -- # val= 00:07:03.445 06:24:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.445 06:24:55 -- accel/accel.sh@20 -- # IFS=: 00:07:03.445 06:24:55 -- 
accel/accel.sh@20 -- # read -r var val 00:07:03.445 06:24:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.445 06:24:55 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:03.445 06:24:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.445 00:07:03.445 real 0m3.005s 00:07:03.445 user 0m2.536s 00:07:03.445 sys 0m0.264s 00:07:03.445 06:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.445 ************************************ 00:07:03.445 END TEST accel_copy 00:07:03.445 ************************************ 00:07:03.445 06:24:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.445 06:24:55 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.445 06:24:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:03.445 06:24:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.445 06:24:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.445 ************************************ 00:07:03.445 START TEST accel_fill 00:07:03.445 ************************************ 00:07:03.445 06:24:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.445 06:24:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.445 06:24:55 -- accel/accel.sh@17 -- # local accel_module 00:07:03.445 06:24:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.445 06:24:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.445 06:24:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.445 06:24:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.445 06:24:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.445 06:24:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.445 06:24:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.445 06:24:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.445 06:24:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.445 06:24:55 -- accel/accel.sh@42 -- # jq -r . 00:07:03.445 [2024-10-04 06:24:55.994195] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:03.445 [2024-10-04 06:24:55.994311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70329 ] 00:07:03.703 [2024-10-04 06:24:56.139080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.703 [2024-10-04 06:24:56.198677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.077 06:24:57 -- accel/accel.sh@18 -- # out=' 00:07:05.077 SPDK Configuration: 00:07:05.077 Core mask: 0x1 00:07:05.077 00:07:05.077 Accel Perf Configuration: 00:07:05.077 Workload Type: fill 00:07:05.077 Fill pattern: 0x80 00:07:05.077 Transfer size: 4096 bytes 00:07:05.077 Vector count 1 00:07:05.077 Module: software 00:07:05.077 Queue depth: 64 00:07:05.077 Allocate depth: 64 00:07:05.077 # threads/core: 1 00:07:05.077 Run time: 1 seconds 00:07:05.077 Verify: Yes 00:07:05.077 00:07:05.077 Running for 1 seconds... 
00:07:05.077 00:07:05.077 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.077 ------------------------------------------------------------------------------------ 00:07:05.077 0,0 557248/s 2176 MiB/s 0 0 00:07:05.077 ==================================================================================== 00:07:05.077 Total 557248/s 2176 MiB/s 0 0' 00:07:05.077 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.077 06:24:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.077 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.077 06:24:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:05.077 06:24:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.077 06:24:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.077 06:24:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.077 06:24:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.077 06:24:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.077 06:24:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.077 06:24:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.077 06:24:57 -- accel/accel.sh@42 -- # jq -r . 00:07:05.077 [2024-10-04 06:24:57.507478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:05.077 [2024-10-04 06:24:57.507587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70343 ] 00:07:05.077 [2024-10-04 06:24:57.644209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.077 [2024-10-04 06:24:57.708936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=0x1 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=fill 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=0x80 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 
00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=software 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=64 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=64 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=1 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val=Yes 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:05.336 06:24:57 -- accel/accel.sh@21 -- # val= 00:07:05.336 06:24:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # IFS=: 00:07:05.336 06:24:57 -- accel/accel.sh@20 -- # read -r var val 00:07:06.708 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.708 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.708 06:24:58 -- accel/accel.sh@20 -- # IFS=: 00:07:06.708 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.708 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.708 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.708 06:24:58 -- accel/accel.sh@20 -- # IFS=: 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.709 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.709 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # IFS=: 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.709 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.709 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # IFS=: 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.709 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.709 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # IFS=: 
00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.709 06:24:58 -- accel/accel.sh@21 -- # val= 00:07:06.709 06:24:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # IFS=: 00:07:06.709 06:24:58 -- accel/accel.sh@20 -- # read -r var val 00:07:06.709 06:24:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.709 06:24:58 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:06.709 06:24:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.709 00:07:06.709 real 0m2.994s 00:07:06.709 user 0m2.524s 00:07:06.709 sys 0m0.269s 00:07:06.709 06:24:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.709 06:24:58 -- common/autotest_common.sh@10 -- # set +x 00:07:06.709 ************************************ 00:07:06.709 END TEST accel_fill 00:07:06.709 ************************************ 00:07:06.709 06:24:59 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:06.709 06:24:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.709 06:24:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.709 06:24:59 -- common/autotest_common.sh@10 -- # set +x 00:07:06.709 ************************************ 00:07:06.709 START TEST accel_copy_crc32c 00:07:06.709 ************************************ 00:07:06.709 06:24:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:06.709 06:24:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.709 06:24:59 -- accel/accel.sh@17 -- # local accel_module 00:07:06.709 06:24:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.709 06:24:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.709 06:24:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.709 06:24:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.709 06:24:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.709 06:24:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.709 06:24:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.709 06:24:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.709 06:24:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.709 06:24:59 -- accel/accel.sh@42 -- # jq -r . 00:07:06.709 [2024-10-04 06:24:59.039255] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:06.709 [2024-10-04 06:24:59.039378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70385 ] 00:07:06.709 [2024-10-04 06:24:59.166154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.709 [2024-10-04 06:24:59.230062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.083 06:25:00 -- accel/accel.sh@18 -- # out=' 00:07:08.083 SPDK Configuration: 00:07:08.083 Core mask: 0x1 00:07:08.083 00:07:08.083 Accel Perf Configuration: 00:07:08.083 Workload Type: copy_crc32c 00:07:08.083 CRC-32C seed: 0 00:07:08.083 Vector size: 4096 bytes 00:07:08.083 Transfer size: 4096 bytes 00:07:08.083 Vector count 1 00:07:08.083 Module: software 00:07:08.083 Queue depth: 32 00:07:08.083 Allocate depth: 32 00:07:08.083 # threads/core: 1 00:07:08.083 Run time: 1 seconds 00:07:08.083 Verify: Yes 00:07:08.083 00:07:08.083 Running for 1 seconds... 
00:07:08.083 00:07:08.083 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.083 ------------------------------------------------------------------------------------ 00:07:08.083 0,0 311744/s 1217 MiB/s 0 0 00:07:08.083 ==================================================================================== 00:07:08.083 Total 311744/s 1217 MiB/s 0 0' 00:07:08.083 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.083 06:25:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:08.083 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.083 06:25:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:08.083 06:25:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.083 06:25:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.083 06:25:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:25:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:25:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.083 06:25:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.083 06:25:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.083 06:25:00 -- accel/accel.sh@42 -- # jq -r . 00:07:08.083 [2024-10-04 06:25:00.509716] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:08.083 [2024-10-04 06:25:00.509849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70399 ] 00:07:08.083 [2024-10-04 06:25:00.644946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.083 [2024-10-04 06:25:00.705527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=0x1 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=0 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 
06:25:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=software 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=32 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=32 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=1 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val=Yes 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:08.342 06:25:00 -- accel/accel.sh@21 -- # val= 00:07:08.342 06:25:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # IFS=: 00:07:08.342 06:25:00 -- accel/accel.sh@20 -- # read -r var val 00:07:09.717 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.717 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # IFS=: 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.717 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.717 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # IFS=: 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.717 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.717 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # IFS=: 00:07:09.717 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.717 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.718 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # IFS=: 
00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.718 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.718 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # IFS=: 00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.718 06:25:01 -- accel/accel.sh@21 -- # val= 00:07:09.718 06:25:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # IFS=: 00:07:09.718 06:25:01 -- accel/accel.sh@20 -- # read -r var val 00:07:09.718 06:25:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.718 06:25:01 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:09.718 06:25:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.718 00:07:09.718 real 0m2.970s 00:07:09.718 user 0m2.508s 00:07:09.718 sys 0m0.264s 00:07:09.718 06:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.718 06:25:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.718 ************************************ 00:07:09.718 END TEST accel_copy_crc32c 00:07:09.718 ************************************ 00:07:09.718 06:25:02 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.718 06:25:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:09.718 06:25:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.718 06:25:02 -- common/autotest_common.sh@10 -- # set +x 00:07:09.718 ************************************ 00:07:09.718 START TEST accel_copy_crc32c_C2 00:07:09.718 ************************************ 00:07:09.718 06:25:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:09.718 06:25:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.718 06:25:02 -- accel/accel.sh@17 -- # local accel_module 00:07:09.718 06:25:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:09.718 06:25:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:09.718 06:25:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.718 06:25:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.718 06:25:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.718 06:25:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.718 06:25:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.718 06:25:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.718 06:25:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.718 06:25:02 -- accel/accel.sh@42 -- # jq -r . 00:07:09.718 [2024-10-04 06:25:02.065059] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:07:09.718 [2024-10-04 06:25:02.065169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70438 ] 00:07:09.718 [2024-10-04 06:25:02.200346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.718 [2024-10-04 06:25:02.262910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.092 06:25:03 -- accel/accel.sh@18 -- # out=' 00:07:11.092 SPDK Configuration: 00:07:11.092 Core mask: 0x1 00:07:11.092 00:07:11.092 Accel Perf Configuration: 00:07:11.092 Workload Type: copy_crc32c 00:07:11.092 CRC-32C seed: 0 00:07:11.092 Vector size: 4096 bytes 00:07:11.092 Transfer size: 8192 bytes 00:07:11.092 Vector count 2 00:07:11.092 Module: software 00:07:11.092 Queue depth: 32 00:07:11.092 Allocate depth: 32 00:07:11.092 # threads/core: 1 00:07:11.092 Run time: 1 seconds 00:07:11.092 Verify: Yes 00:07:11.092 00:07:11.092 Running for 1 seconds... 00:07:11.092 00:07:11.092 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.092 ------------------------------------------------------------------------------------ 00:07:11.092 0,0 222208/s 1736 MiB/s 0 0 00:07:11.092 ==================================================================================== 00:07:11.092 Total 222208/s 1736 MiB/s 0 0' 00:07:11.093 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.093 06:25:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.093 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.093 06:25:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:11.093 06:25:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.093 06:25:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.093 06:25:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.093 06:25:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.093 06:25:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.093 06:25:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.093 06:25:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.093 06:25:03 -- accel/accel.sh@42 -- # jq -r . 00:07:11.093 [2024-10-04 06:25:03.536141] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:07:11.093 [2024-10-04 06:25:03.536253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70453 ] 00:07:11.093 [2024-10-04 06:25:03.671385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.093 [2024-10-04 06:25:03.730991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=0x1 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=0 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=software 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=32 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=32 
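The dense val=/case/IFS=: trace lines here are bash xtrace output of accel.sh parsing accel_perf's report during this second, module-detection pass: each output line is split on ':' into var and val by read -r, and the case statement records the opcode (accel_opc=copy_crc32c) and the module that actually executed (accel_module=software), which the closing [[ -n software ]] checks then assert on. A rough sketch of that parsing idiom, with simplified whitespace trimming (not the script verbatim):

  # split each report line on ':' and capture the module that ran
  while IFS=: read -r var val; do
      case "$var" in
          *Module*) accel_module=${val//[[:space:]]/} ;;  # e.g. software
      esac
  done < <(./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2)
  [[ -n $accel_module ]]  # the test fails if no module was reported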
00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=1 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val=Yes 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:11.351 06:25:03 -- accel/accel.sh@21 -- # val= 00:07:11.351 06:25:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # IFS=: 00:07:11.351 06:25:03 -- accel/accel.sh@20 -- # read -r var val 00:07:12.725 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.725 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.725 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.725 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.725 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.726 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.726 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.726 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.726 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.726 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.726 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.726 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.726 06:25:05 -- accel/accel.sh@21 -- # val= 00:07:12.726 ************************************ 00:07:12.726 END TEST accel_copy_crc32c_C2 00:07:12.726 ************************************ 00:07:12.726 06:25:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # IFS=: 00:07:12.726 06:25:05 -- accel/accel.sh@20 -- # read -r var val 00:07:12.726 06:25:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.726 06:25:05 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:12.726 06:25:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.726 00:07:12.726 real 0m2.973s 00:07:12.726 user 0m2.514s 00:07:12.726 sys 0m0.259s 00:07:12.726 06:25:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.726 06:25:05 -- common/autotest_common.sh@10 -- # set +x 00:07:12.726 06:25:05 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:12.726 06:25:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:12.726 06:25:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.726 06:25:05 -- common/autotest_common.sh@10 -- # set +x 00:07:12.726 ************************************ 00:07:12.726 START TEST accel_dualcast 00:07:12.726 ************************************ 00:07:12.726 06:25:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:12.726 06:25:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.726 06:25:05 -- accel/accel.sh@17 -- # local accel_module 00:07:12.726 06:25:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:12.726 06:25:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:12.726 06:25:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.726 06:25:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.726 06:25:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.726 06:25:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.726 06:25:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.726 06:25:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.726 06:25:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.726 06:25:05 -- accel/accel.sh@42 -- # jq -r . 00:07:12.726 [2024-10-04 06:25:05.090022] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:12.726 [2024-10-04 06:25:05.090259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70488 ] 00:07:12.726 [2024-10-04 06:25:05.227458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.726 [2024-10-04 06:25:05.294153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.108 06:25:06 -- accel/accel.sh@18 -- # out=' 00:07:14.108 SPDK Configuration: 00:07:14.108 Core mask: 0x1 00:07:14.108 00:07:14.108 Accel Perf Configuration: 00:07:14.108 Workload Type: dualcast 00:07:14.108 Transfer size: 4096 bytes 00:07:14.108 Vector count 1 00:07:14.108 Module: software 00:07:14.108 Queue depth: 32 00:07:14.108 Allocate depth: 32 00:07:14.108 # threads/core: 1 00:07:14.108 Run time: 1 seconds 00:07:14.108 Verify: Yes 00:07:14.108 00:07:14.108 Running for 1 seconds... 00:07:14.108 00:07:14.108 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.108 ------------------------------------------------------------------------------------ 00:07:14.108 0,0 421696/s 1647 MiB/s 0 0 00:07:14.108 ==================================================================================== 00:07:14.108 Total 421696/s 1647 MiB/s 0 0' 00:07:14.108 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.108 06:25:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:14.108 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.108 06:25:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:14.108 06:25:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.108 06:25:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.108 06:25:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.108 06:25:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.108 06:25:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.108 06:25:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.108 06:25:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.108 06:25:06 -- accel/accel.sh@42 -- # jq -r . 
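dualcast writes one source buffer to two destinations per operation; 421696 transfers/s at 4096 bytes matches the reported bandwidth, 421696 * 4096 / 2^20 ≈ 1647 MiB/s. A sketch of the equivalent standalone invocation, using the flags shown in the trace (this workload takes no vector- or source-count option):

  # 1-second software dualcast run with verification
  ./build/examples/accel_perf -t 1 -w dualcast -y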
00:07:14.108 [2024-10-04 06:25:06.588557] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:14.108 [2024-10-04 06:25:06.588664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70508 ] 00:07:14.108 [2024-10-04 06:25:06.724098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.108 [2024-10-04 06:25:06.785850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=0x1 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=dualcast 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=software 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=32 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=32 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=1 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 
06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val=Yes 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:14.367 06:25:06 -- accel/accel.sh@21 -- # val= 00:07:14.367 06:25:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # IFS=: 00:07:14.367 06:25:06 -- accel/accel.sh@20 -- # read -r var val 00:07:15.743 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@21 -- # val= 00:07:15.744 06:25:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # IFS=: 00:07:15.744 06:25:08 -- accel/accel.sh@20 -- # read -r var val 00:07:15.744 06:25:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.744 06:25:08 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:15.744 06:25:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.744 00:07:15.744 real 0m2.999s 00:07:15.744 user 0m2.535s 00:07:15.744 sys 0m0.260s 00:07:15.744 06:25:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.744 06:25:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.744 ************************************ 00:07:15.744 END TEST accel_dualcast 00:07:15.744 ************************************ 00:07:15.744 06:25:08 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:15.744 06:25:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:15.744 06:25:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.744 06:25:08 -- common/autotest_common.sh@10 -- # set +x 00:07:15.744 ************************************ 00:07:15.744 START TEST accel_compare 00:07:15.744 ************************************ 00:07:15.744 06:25:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:15.744 
06:25:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.744 06:25:08 -- accel/accel.sh@17 -- # local accel_module 00:07:15.744 06:25:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:15.744 06:25:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:15.744 06:25:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.744 06:25:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.744 06:25:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.744 06:25:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.744 06:25:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.744 06:25:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.744 06:25:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.744 06:25:08 -- accel/accel.sh@42 -- # jq -r . 00:07:15.744 [2024-10-04 06:25:08.148998] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:15.744 [2024-10-04 06:25:08.149094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:07:15.744 [2024-10-04 06:25:08.284264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.744 [2024-10-04 06:25:08.349130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.123 06:25:09 -- accel/accel.sh@18 -- # out=' 00:07:17.123 SPDK Configuration: 00:07:17.123 Core mask: 0x1 00:07:17.123 00:07:17.123 Accel Perf Configuration: 00:07:17.123 Workload Type: compare 00:07:17.123 Transfer size: 4096 bytes 00:07:17.123 Vector count 1 00:07:17.123 Module: software 00:07:17.123 Queue depth: 32 00:07:17.123 Allocate depth: 32 00:07:17.123 # threads/core: 1 00:07:17.123 Run time: 1 seconds 00:07:17.123 Verify: Yes 00:07:17.123 00:07:17.123 Running for 1 seconds... 00:07:17.123 00:07:17.123 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.123 ------------------------------------------------------------------------------------ 00:07:17.123 0,0 568288/s 2219 MiB/s 0 0 00:07:17.123 ==================================================================================== 00:07:17.123 Total 568288/s 2219 MiB/s 0 0' 00:07:17.123 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.123 06:25:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:17.123 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.123 06:25:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.123 06:25:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.123 06:25:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.123 06:25:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.123 06:25:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.123 06:25:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.123 06:25:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.123 06:25:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.123 06:25:09 -- accel/accel.sh@42 -- # jq -r . 00:07:17.123 [2024-10-04 06:25:09.655622] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
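compare is the cheapest opcode in this batch (the software path is a byte-for-byte comparison of two buffers), hence the highest rate: 568288 * 4096 / 2^20 ≈ 2219 MiB/s in both rows. The standalone equivalent, same flags as in the trace:

  # 1-second software compare run with verification
  ./build/examples/accel_perf -t 1 -w compare -y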
00:07:17.123 [2024-10-04 06:25:09.656062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70562 ] 00:07:17.123 [2024-10-04 06:25:09.792457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.382 [2024-10-04 06:25:09.851679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=0x1 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=compare 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=software 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=32 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=32 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=1 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val=Yes 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:17.382 06:25:09 -- accel/accel.sh@21 -- # val= 00:07:17.382 06:25:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # IFS=: 00:07:17.382 06:25:09 -- accel/accel.sh@20 -- # read -r var val 00:07:18.759 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.759 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.759 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.759 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.759 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.759 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.759 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.759 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.759 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.759 ************************************ 00:07:18.759 END TEST accel_compare 00:07:18.760 ************************************ 00:07:18.760 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.760 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.760 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.760 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.760 06:25:11 -- accel/accel.sh@21 -- # val= 00:07:18.760 06:25:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.760 06:25:11 -- accel/accel.sh@20 -- # IFS=: 00:07:18.760 06:25:11 -- accel/accel.sh@20 -- # read -r var val 00:07:18.760 06:25:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.760 06:25:11 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:18.760 06:25:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.760 00:07:18.760 real 0m2.992s 00:07:18.760 user 0m2.521s 00:07:18.760 sys 0m0.268s 00:07:18.760 06:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.760 06:25:11 -- common/autotest_common.sh@10 -- # set +x 00:07:18.760 06:25:11 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:18.760 06:25:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:18.760 06:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.760 06:25:11 -- common/autotest_common.sh@10 -- # set +x 00:07:18.760 ************************************ 00:07:18.760 START TEST accel_xor 00:07:18.760 ************************************ 00:07:18.760 06:25:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:18.760 06:25:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.760 06:25:11 -- accel/accel.sh@17 -- # local accel_module 00:07:18.760 
06:25:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:18.760 06:25:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:18.760 06:25:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.760 06:25:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.760 06:25:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.760 06:25:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.760 06:25:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.760 06:25:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.760 06:25:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.760 06:25:11 -- accel/accel.sh@42 -- # jq -r . 00:07:18.760 [2024-10-04 06:25:11.191022] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:18.760 [2024-10-04 06:25:11.191098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70597 ] 00:07:18.760 [2024-10-04 06:25:11.323322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.760 [2024-10-04 06:25:11.395157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.137 06:25:12 -- accel/accel.sh@18 -- # out=' 00:07:20.137 SPDK Configuration: 00:07:20.137 Core mask: 0x1 00:07:20.137 00:07:20.137 Accel Perf Configuration: 00:07:20.137 Workload Type: xor 00:07:20.137 Source buffers: 2 00:07:20.137 Transfer size: 4096 bytes 00:07:20.137 Vector count 1 00:07:20.137 Module: software 00:07:20.137 Queue depth: 32 00:07:20.137 Allocate depth: 32 00:07:20.137 # threads/core: 1 00:07:20.137 Run time: 1 seconds 00:07:20.137 Verify: Yes 00:07:20.137 00:07:20.137 Running for 1 seconds... 00:07:20.137 00:07:20.137 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.137 ------------------------------------------------------------------------------------ 00:07:20.137 0,0 264096/s 1031 MiB/s 0 0 00:07:20.137 ==================================================================================== 00:07:20.137 Total 264096/s 1031 MiB/s 0 0' 00:07:20.137 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.137 06:25:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:20.137 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.137 06:25:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:20.137 06:25:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.137 06:25:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.137 06:25:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.137 06:25:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.137 06:25:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.137 06:25:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.137 06:25:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.137 06:25:12 -- accel/accel.sh@42 -- # jq -r . 00:07:20.137 [2024-10-04 06:25:12.707767] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
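This first xor pass ran with the default two source buffers (no -x on the command line, and the report prints Source buffers: 2): 264096 * 4096 / 2^20 ≈ 1031 MiB/s. The standalone equivalent:

  # 1-second software xor run, two source buffers (the default), verification on
  ./build/examples/accel_perf -t 1 -w xor -y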
00:07:20.137 [2024-10-04 06:25:12.707891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:07:20.397 [2024-10-04 06:25:12.844419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.397 [2024-10-04 06:25:12.911885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=0x1 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=xor 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=2 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=software 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=32 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=32 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=1 00:07:20.397 06:25:12 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:12 -- accel/accel.sh@21 -- # val=Yes 00:07:20.397 06:25:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:12 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:13 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:13 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:13 -- accel/accel.sh@20 -- # read -r var val 00:07:20.397 06:25:13 -- accel/accel.sh@21 -- # val= 00:07:20.397 06:25:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.397 06:25:13 -- accel/accel.sh@20 -- # IFS=: 00:07:20.397 06:25:13 -- accel/accel.sh@20 -- # read -r var val 00:07:21.772 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@21 -- # val= 00:07:21.773 06:25:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # IFS=: 00:07:21.773 06:25:14 -- accel/accel.sh@20 -- # read -r var val 00:07:21.773 06:25:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.773 06:25:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:21.773 06:25:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.773 00:07:21.773 real 0m3.033s 00:07:21.773 user 0m2.549s 00:07:21.773 sys 0m0.278s 00:07:21.773 06:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.773 06:25:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.773 ************************************ 00:07:21.773 END TEST accel_xor 00:07:21.773 ************************************ 00:07:21.773 06:25:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:21.773 06:25:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:21.773 06:25:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.773 06:25:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.773 ************************************ 00:07:21.773 START TEST accel_xor 00:07:21.773 ************************************ 00:07:21.773 
06:25:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:21.773 06:25:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.773 06:25:14 -- accel/accel.sh@17 -- # local accel_module 00:07:21.773 06:25:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:21.773 06:25:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:21.773 06:25:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.773 06:25:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.773 06:25:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.773 06:25:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.773 06:25:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.773 06:25:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.773 06:25:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.773 06:25:14 -- accel/accel.sh@42 -- # jq -r . 00:07:21.773 [2024-10-04 06:25:14.288289] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:21.773 [2024-10-04 06:25:14.288395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70651 ] 00:07:21.773 [2024-10-04 06:25:14.424320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.031 [2024-10-04 06:25:14.485936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.407 06:25:15 -- accel/accel.sh@18 -- # out=' 00:07:23.407 SPDK Configuration: 00:07:23.407 Core mask: 0x1 00:07:23.407 00:07:23.407 Accel Perf Configuration: 00:07:23.407 Workload Type: xor 00:07:23.407 Source buffers: 3 00:07:23.407 Transfer size: 4096 bytes 00:07:23.407 Vector count 1 00:07:23.407 Module: software 00:07:23.407 Queue depth: 32 00:07:23.407 Allocate depth: 32 00:07:23.407 # threads/core: 1 00:07:23.407 Run time: 1 seconds 00:07:23.407 Verify: Yes 00:07:23.407 00:07:23.407 Running for 1 seconds... 00:07:23.407 00:07:23.407 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.407 ------------------------------------------------------------------------------------ 00:07:23.407 0,0 255392/s 997 MiB/s 0 0 00:07:23.407 ==================================================================================== 00:07:23.407 Total 255392/s 997 MiB/s 0 0' 00:07:23.407 06:25:15 -- accel/accel.sh@20 -- # IFS=: 00:07:23.407 06:25:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.407 06:25:15 -- accel/accel.sh@20 -- # read -r var val 00:07:23.407 06:25:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.407 06:25:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.407 06:25:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.407 06:25:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.407 06:25:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.407 06:25:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.407 06:25:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.407 06:25:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.407 06:25:15 -- accel/accel.sh@42 -- # jq -r . 00:07:23.407 [2024-10-04 06:25:15.770484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
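The second xor test repeats the workload with a third source buffer via -x 3; throughput drops from 1031 to 997 MiB/s (255392 * 4096 / 2^20 ≈ 997), plausibly because each operation now reads one more 4096-byte input. Standalone equivalent, flags from the trace:

  # 1-second software xor run over three source buffers
  ./build/examples/accel_perf -t 1 -w xor -y -x 3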
00:07:23.407 [2024-10-04 06:25:15.770578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70670 ] 00:07:23.408 [2024-10-04 06:25:15.908861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.408 [2024-10-04 06:25:15.986144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=0x1 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=xor 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=3 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=software 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=32 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=32 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=1 00:07:23.408 06:25:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val=Yes 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:23.408 06:25:16 -- accel/accel.sh@21 -- # val= 00:07:23.408 06:25:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # IFS=: 00:07:23.408 06:25:16 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@21 -- # val= 00:07:24.787 06:25:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # IFS=: 00:07:24.787 06:25:17 -- accel/accel.sh@20 -- # read -r var val 00:07:24.787 06:25:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.787 06:25:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:24.787 06:25:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.787 00:07:24.787 real 0m2.996s 00:07:24.787 user 0m2.513s 00:07:24.787 sys 0m0.275s 00:07:24.787 06:25:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.787 06:25:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.787 ************************************ 00:07:24.787 END TEST accel_xor 00:07:24.787 ************************************ 00:07:24.787 06:25:17 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:24.787 06:25:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:24.787 06:25:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.787 06:25:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.787 ************************************ 00:07:24.787 START TEST accel_dif_verify 00:07:24.787 ************************************ 
00:07:24.787 06:25:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:24.787 06:25:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.787 06:25:17 -- accel/accel.sh@17 -- # local accel_module 00:07:24.787 06:25:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:24.787 06:25:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:24.787 06:25:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.787 06:25:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.787 06:25:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.787 06:25:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.787 06:25:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.787 06:25:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.787 06:25:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.787 06:25:17 -- accel/accel.sh@42 -- # jq -r . 00:07:24.787 [2024-10-04 06:25:17.342989] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:24.787 [2024-10-04 06:25:17.343676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70705 ] 00:07:25.046 [2024-10-04 06:25:17.475293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.046 [2024-10-04 06:25:17.535613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.422 06:25:18 -- accel/accel.sh@18 -- # out=' 00:07:26.422 SPDK Configuration: 00:07:26.422 Core mask: 0x1 00:07:26.422 00:07:26.422 Accel Perf Configuration: 00:07:26.422 Workload Type: dif_verify 00:07:26.422 Vector size: 4096 bytes 00:07:26.422 Transfer size: 4096 bytes 00:07:26.422 Block size: 512 bytes 00:07:26.422 Metadata size: 8 bytes 00:07:26.422 Vector count 1 00:07:26.422 Module: software 00:07:26.422 Queue depth: 32 00:07:26.422 Allocate depth: 32 00:07:26.422 # threads/core: 1 00:07:26.422 Run time: 1 seconds 00:07:26.422 Verify: No 00:07:26.422 00:07:26.422 Running for 1 seconds... 00:07:26.422 00:07:26.422 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.422 ------------------------------------------------------------------------------------ 00:07:26.422 0,0 127040/s 496 MiB/s 0 0 00:07:26.422 ==================================================================================== 00:07:26.422 Total 127040/s 496 MiB/s 0 0' 00:07:26.422 06:25:18 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:18 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.422 06:25:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.422 06:25:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.422 06:25:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.422 06:25:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.422 06:25:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.422 06:25:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.422 06:25:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.422 06:25:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.422 06:25:18 -- accel/accel.sh@42 -- # jq -r . 00:07:26.422 [2024-10-04 06:25:18.801926] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
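dif_verify treats each 4096-byte transfer as eight 512-byte blocks, each carrying 8 bytes of DIF metadata, and checks the protection fields. The dif tests run without -y (run_test passes only 6 arguments, so there is no verify flag), which is why the report shows Verify: No; bandwidth is computed from the 4096-byte transfer size, 127040 * 4096 / 2^20 ≈ 496 MiB/s. A standalone sketch of the same run:

  # 1-second software DIF-verify run: 4096-byte buffers, 512-byte blocks, 8-byte metadata
  ./build/examples/accel_perf -t 1 -w dif_verify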
00:07:26.422 [2024-10-04 06:25:18.802225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70719 ] 00:07:26.422 [2024-10-04 06:25:18.932238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.422 [2024-10-04 06:25:18.997363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val=0x1 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val=dif_verify 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.422 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.422 06:25:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.422 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val=software 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 
-- # val=32 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val=32 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val=1 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val=No 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:26.423 06:25:19 -- accel/accel.sh@21 -- # val= 00:07:26.423 06:25:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # IFS=: 00:07:26.423 06:25:19 -- accel/accel.sh@20 -- # read -r var val 00:07:27.799 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.799 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.799 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.799 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.799 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.799 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.799 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.799 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.799 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.799 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.799 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.800 06:25:20 -- accel/accel.sh@21 -- # val= 00:07:27.800 06:25:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.800 06:25:20 -- accel/accel.sh@20 -- # IFS=: 00:07:27.800 06:25:20 -- accel/accel.sh@20 -- # read -r var val 00:07:27.800 06:25:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.800 06:25:20 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:27.800 06:25:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.800 00:07:27.800 real 0m2.929s 00:07:27.800 user 0m2.463s 00:07:27.800 sys 0m0.260s 00:07:27.800 06:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.800 ************************************ 00:07:27.800 END TEST accel_dif_verify 00:07:27.800 ************************************ 00:07:27.800 
06:25:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.800 06:25:20 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:27.800 06:25:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:27.800 06:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.800 06:25:20 -- common/autotest_common.sh@10 -- # set +x 00:07:27.800 ************************************ 00:07:27.800 START TEST accel_dif_generate 00:07:27.800 ************************************ 00:07:27.800 06:25:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:27.800 06:25:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.800 06:25:20 -- accel/accel.sh@17 -- # local accel_module 00:07:27.800 06:25:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:27.800 06:25:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.800 06:25:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:27.800 06:25:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.800 06:25:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.800 06:25:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.800 06:25:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.800 06:25:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.800 06:25:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.800 06:25:20 -- accel/accel.sh@42 -- # jq -r . 00:07:27.800 [2024-10-04 06:25:20.327007] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:27.800 [2024-10-04 06:25:20.327118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:07:27.800 [2024-10-04 06:25:20.459832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.058 [2024-10-04 06:25:20.519975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.435 06:25:21 -- accel/accel.sh@18 -- # out=' 00:07:29.435 SPDK Configuration: 00:07:29.435 Core mask: 0x1 00:07:29.435 00:07:29.435 Accel Perf Configuration: 00:07:29.435 Workload Type: dif_generate 00:07:29.435 Vector size: 4096 bytes 00:07:29.435 Transfer size: 4096 bytes 00:07:29.435 Block size: 512 bytes 00:07:29.435 Metadata size: 8 bytes 00:07:29.435 Vector count 1 00:07:29.435 Module: software 00:07:29.435 Queue depth: 32 00:07:29.435 Allocate depth: 32 00:07:29.435 # threads/core: 1 00:07:29.435 Run time: 1 seconds 00:07:29.435 Verify: No 00:07:29.435 00:07:29.435 Running for 1 seconds... 
00:07:29.435 00:07:29.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.435 ------------------------------------------------------------------------------------ 00:07:29.435 0,0 154848/s 614 MiB/s 0 0 00:07:29.435 ==================================================================================== 00:07:29.435 Total 154848/s 604 MiB/s 0 0' 00:07:29.435 06:25:21 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:21 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:29.435 06:25:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:29.435 06:25:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.435 06:25:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.435 06:25:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.435 06:25:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.435 06:25:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.435 06:25:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.435 06:25:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.435 06:25:21 -- accel/accel.sh@42 -- # jq -r . 00:07:29.435 [2024-10-04 06:25:21.803324] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:29.435 [2024-10-04 06:25:21.803452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70773 ] 00:07:29.435 [2024-10-04 06:25:21.939372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.435 [2024-10-04 06:25:22.012699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=0x1 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=dif_generate 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 
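As a consistency check on the dif_generate report above: bandwidth is transfers × transfer size, so 154848/s × 4096 B ≈ 604.9 MiB/s, which matches the Total row.

# Quick integer sanity check of the reported figure:
echo $((154848 * 4096 / 1048576))   # 604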
00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=software 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=32 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=32 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.435 06:25:22 -- accel/accel.sh@21 -- # val=1 00:07:29.435 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.435 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.436 06:25:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.436 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.436 06:25:22 -- accel/accel.sh@21 -- # val=No 00:07:29.436 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.436 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.436 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:29.436 06:25:22 -- accel/accel.sh@21 -- # val= 00:07:29.436 06:25:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # IFS=: 00:07:29.436 06:25:22 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- 
accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 ************************************ 00:07:30.810 END TEST accel_dif_generate 00:07:30.810 ************************************ 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@21 -- # val= 00:07:30.810 06:25:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # IFS=: 00:07:30.810 06:25:23 -- accel/accel.sh@20 -- # read -r var val 00:07:30.810 06:25:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.810 06:25:23 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:30.810 06:25:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.810 00:07:30.810 real 0m2.960s 00:07:30.810 user 0m2.484s 00:07:30.810 sys 0m0.273s 00:07:30.810 06:25:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.810 06:25:23 -- common/autotest_common.sh@10 -- # set +x 00:07:30.810 06:25:23 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.810 06:25:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:30.810 06:25:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.810 06:25:23 -- common/autotest_common.sh@10 -- # set +x 00:07:30.810 ************************************ 00:07:30.810 START TEST accel_dif_generate_copy 00:07:30.810 ************************************ 00:07:30.810 06:25:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.810 06:25:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.810 06:25:23 -- accel/accel.sh@17 -- # local accel_module 00:07:30.810 06:25:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.810 06:25:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.810 06:25:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.810 06:25:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.810 06:25:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.810 06:25:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.810 06:25:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.810 06:25:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.810 06:25:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.810 06:25:23 -- accel/accel.sh@42 -- # jq -r . 00:07:30.810 [2024-10-04 06:25:23.340309] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
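The val=/case churn that dominates this trace is accel.sh parsing accel_perf's own configuration printout, captured in $out, to latch the opcode and module for the post-run [[ ... ]] assertions. A rough sketch of that pattern (the match patterns below are paraphrased for illustration, not copied from the script):

# Split each captured report line on ':' into $var/$val and record what ran.
while IFS=: read -r var val; do
  case "$var" in
    *"Workload Type"*) accel_opc=${val// /} ;;
    *Module*)          accel_module=${val// /} ;;
  esac
done <<< "$out"
# The assertions seen at accel.sh@28 then require both to be set, with the
# software module selected:
[[ -n $accel_opc && $accel_module == software ]]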
00:07:30.810 [2024-10-04 06:25:23.340402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70813 ] 00:07:30.810 [2024-10-04 06:25:23.479081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.069 [2024-10-04 06:25:23.569393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.445 06:25:24 -- accel/accel.sh@18 -- # out=' 00:07:32.445 SPDK Configuration: 00:07:32.445 Core mask: 0x1 00:07:32.445 00:07:32.445 Accel Perf Configuration: 00:07:32.445 Workload Type: dif_generate_copy 00:07:32.445 Vector size: 4096 bytes 00:07:32.445 Transfer size: 4096 bytes 00:07:32.445 Vector count 1 00:07:32.445 Module: software 00:07:32.445 Queue depth: 32 00:07:32.445 Allocate depth: 32 00:07:32.445 # threads/core: 1 00:07:32.445 Run time: 1 seconds 00:07:32.445 Verify: No 00:07:32.445 00:07:32.445 Running for 1 seconds... 00:07:32.445 00:07:32.445 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.445 ------------------------------------------------------------------------------------ 00:07:32.445 0,0 116096/s 460 MiB/s 0 0 00:07:32.445 ==================================================================================== 00:07:32.445 Total 116096/s 453 MiB/s 0 0' 00:07:32.445 06:25:24 -- accel/accel.sh@20 -- # IFS=: 00:07:32.445 06:25:24 -- accel/accel.sh@20 -- # read -r var val 00:07:32.445 06:25:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.445 06:25:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.445 06:25:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.445 06:25:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.445 06:25:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.445 06:25:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.445 06:25:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.445 06:25:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.445 06:25:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.445 06:25:24 -- accel/accel.sh@42 -- # jq -r . 00:07:32.445 [2024-10-04 06:25:24.876574] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
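Same arithmetic for dif_generate_copy: 116096/s × 4096 B ≈ 453.5 MiB/s, matching the Total row above; the lower rate relative to plain dif_generate is consistent with each operation also copying the 4 KiB payload.

echo $((116096 * 4096 / 1048576))   # 453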
00:07:32.445 [2024-10-04 06:25:24.876669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70827 ] 00:07:32.445 [2024-10-04 06:25:25.012295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.445 [2024-10-04 06:25:25.070078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=0x1 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=software 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=32 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=32 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 
-- # val=1 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val=No 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:32.705 06:25:25 -- accel/accel.sh@21 -- # val= 00:07:32.705 06:25:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # IFS=: 00:07:32.705 06:25:25 -- accel/accel.sh@20 -- # read -r var val 00:07:34.082 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.082 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.082 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.082 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.082 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.082 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.082 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.082 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.083 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.083 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.083 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.083 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.083 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.083 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.083 06:25:26 -- accel/accel.sh@21 -- # val= 00:07:34.083 06:25:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # IFS=: 00:07:34.083 06:25:26 -- accel/accel.sh@20 -- # read -r var val 00:07:34.083 ************************************ 00:07:34.083 06:25:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.083 06:25:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:34.083 06:25:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.083 00:07:34.083 real 0m3.013s 00:07:34.083 user 0m2.532s 00:07:34.083 sys 0m0.275s 00:07:34.083 06:25:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.083 06:25:26 -- common/autotest_common.sh@10 -- # set +x 00:07:34.083 END TEST accel_dif_generate_copy 00:07:34.083 ************************************ 00:07:34.083 06:25:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:34.083 06:25:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.083 06:25:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:34.083 06:25:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.083 06:25:26 -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.083 ************************************ 00:07:34.083 START TEST accel_comp 00:07:34.083 ************************************ 00:07:34.083 06:25:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.083 06:25:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.083 06:25:26 -- accel/accel.sh@17 -- # local accel_module 00:07:34.083 06:25:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.083 06:25:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.083 06:25:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.083 06:25:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.083 06:25:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.083 06:25:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.083 06:25:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.083 06:25:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.083 06:25:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.083 06:25:26 -- accel/accel.sh@42 -- # jq -r . 00:07:34.083 [2024-10-04 06:25:26.413842] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:34.083 [2024-10-04 06:25:26.413938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:07:34.083 [2024-10-04 06:25:26.551473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.083 [2024-10-04 06:25:26.625084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.460 06:25:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.460 00:07:35.460 SPDK Configuration: 00:07:35.460 Core mask: 0x1 00:07:35.460 00:07:35.460 Accel Perf Configuration: 00:07:35.460 Workload Type: compress 00:07:35.460 Transfer size: 4096 bytes 00:07:35.460 Vector count 1 00:07:35.460 Module: software 00:07:35.460 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.460 Queue depth: 32 00:07:35.460 Allocate depth: 32 00:07:35.460 # threads/core: 1 00:07:35.460 Run time: 1 seconds 00:07:35.460 Verify: No 00:07:35.460 00:07:35.460 Running for 1 seconds... 
00:07:35.460 00:07:35.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.460 ------------------------------------------------------------------------------------ 00:07:35.460 0,0 58240/s 242 MiB/s 0 0 00:07:35.460 ==================================================================================== 00:07:35.460 Total 58240/s 227 MiB/s 0 0' 00:07:35.460 06:25:27 -- accel/accel.sh@20 -- # IFS=: 00:07:35.460 06:25:27 -- accel/accel.sh@20 -- # read -r var val 00:07:35.460 06:25:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.460 06:25:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.460 06:25:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.460 06:25:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.460 06:25:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.460 06:25:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.460 06:25:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.460 06:25:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.460 06:25:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.460 06:25:27 -- accel/accel.sh@42 -- # jq -r . 00:07:35.460 [2024-10-04 06:25:27.912168] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:35.460 [2024-10-04 06:25:27.912270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70881 ] 00:07:35.460 [2024-10-04 06:25:28.048058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.460 [2024-10-04 06:25:28.106795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=0x1 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=compress 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 
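For compress the input comes from a file rather than synthetic buffers, hence the -l /home/vagrant/spdk_repo/spdk/test/accel/bib argument and the 'Preparing input file...' line. The Total row again agrees with transfers × 4096 B: 58240/s ≈ 227.5 MiB/s.

echo $((58240 * 4096 / 1048576))   # 227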
00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=software 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=32 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=32 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=1 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val=No 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:35.719 06:25:28 -- accel/accel.sh@21 -- # val= 00:07:35.719 06:25:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # IFS=: 00:07:35.719 06:25:28 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 
00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@21 -- # val= 00:07:37.096 ************************************ 00:07:37.096 END TEST accel_comp 00:07:37.096 ************************************ 00:07:37.096 06:25:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # IFS=: 00:07:37.096 06:25:29 -- accel/accel.sh@20 -- # read -r var val 00:07:37.096 06:25:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.096 06:25:29 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:37.096 06:25:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.096 00:07:37.096 real 0m3.005s 00:07:37.096 user 0m2.533s 00:07:37.096 sys 0m0.268s 00:07:37.096 06:25:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.096 06:25:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.096 06:25:29 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.096 06:25:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.096 06:25:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.096 06:25:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.096 ************************************ 00:07:37.096 START TEST accel_decomp 00:07:37.096 ************************************ 00:07:37.097 06:25:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.097 06:25:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.097 06:25:29 -- accel/accel.sh@17 -- # local accel_module 00:07:37.097 06:25:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.097 06:25:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.097 06:25:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.097 06:25:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.097 06:25:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.097 06:25:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.097 06:25:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.097 06:25:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.097 06:25:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.097 06:25:29 -- accel/accel.sh@42 -- # jq -r . 00:07:37.097 [2024-10-04 06:25:29.474756] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:37.097 [2024-10-04 06:25:29.474867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:07:37.097 [2024-10-04 06:25:29.611119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.097 [2024-10-04 06:25:29.682683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.473 06:25:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:38.473 00:07:38.473 SPDK Configuration: 00:07:38.473 Core mask: 0x1 00:07:38.473 00:07:38.473 Accel Perf Configuration: 00:07:38.473 Workload Type: decompress 00:07:38.473 Transfer size: 4096 bytes 00:07:38.473 Vector count 1 00:07:38.473 Module: software 00:07:38.473 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.473 Queue depth: 32 00:07:38.473 Allocate depth: 32 00:07:38.473 # threads/core: 1 00:07:38.473 Run time: 1 seconds 00:07:38.473 Verify: Yes 00:07:38.473 00:07:38.473 Running for 1 seconds... 00:07:38.473 00:07:38.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.473 ------------------------------------------------------------------------------------ 00:07:38.473 0,0 84064/s 154 MiB/s 0 0 00:07:38.473 ==================================================================================== 00:07:38.473 Total 84064/s 328 MiB/s 0 0' 00:07:38.473 06:25:30 -- accel/accel.sh@20 -- # IFS=: 00:07:38.473 06:25:30 -- accel/accel.sh@20 -- # read -r var val 00:07:38.473 06:25:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:38.473 06:25:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:38.473 06:25:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.473 06:25:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.473 06:25:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.473 06:25:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.473 06:25:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.473 06:25:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.473 06:25:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.473 06:25:30 -- accel/accel.sh@42 -- # jq -r . 00:07:38.473 [2024-10-04 06:25:30.973233] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
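accel_decomp reuses the same bib input but adds -y, so this report shows Verify: Yes where the generate/compress cases showed No. The Total row checks out against the transfer count (84064/s × 4096 B ≈ 328.4 MiB/s); the 0,0 row's 154 MiB/s does not match its own 84064/s figure and looks like a reporting quirk, so the transfer count is the figure to rely on there.

echo $((84064 * 4096 / 1048576))   # 328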
00:07:38.473 [2024-10-04 06:25:30.973562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70935 ] 00:07:38.473 [2024-10-04 06:25:31.116968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.732 [2024-10-04 06:25:31.173226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=0x1 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=decompress 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=software 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=32 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- 
accel/accel.sh@21 -- # val=32 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=1 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val=Yes 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 06:25:31 -- accel/accel.sh@21 -- # val= 00:07:38.732 06:25:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 06:25:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@21 -- # val= 00:07:40.107 06:25:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # IFS=: 00:07:40.107 06:25:32 -- accel/accel.sh@20 -- # read -r var val 00:07:40.107 06:25:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.107 06:25:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.107 06:25:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.107 00:07:40.107 real 0m3.010s 00:07:40.107 user 0m2.521s 00:07:40.107 sys 0m0.282s 00:07:40.107 06:25:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.107 06:25:32 -- common/autotest_common.sh@10 -- # set +x 00:07:40.107 ************************************ 00:07:40.107 END TEST accel_decomp 00:07:40.107 ************************************ 00:07:40.107 06:25:32 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
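accel_decmop_full (the spelling comes straight from the run_test name in accel.sh) is the same decompress job with -o 0 appended; judging by the report that follows, a zero transfer size makes accel_perf take the transfer size from the input instead of the 4 KiB default, hence the 111250-byte transfers at a much lower ops rate. A standalone sketch under the same path assumptions as before:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0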
00:07:40.107 06:25:32 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.107 06:25:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.107 06:25:32 -- common/autotest_common.sh@10 -- # set +x 00:07:40.107 ************************************ 00:07:40.107 START TEST accel_decmop_full 00:07:40.107 ************************************ 00:07:40.107 06:25:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.107 06:25:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.107 06:25:32 -- accel/accel.sh@17 -- # local accel_module 00:07:40.107 06:25:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.107 06:25:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.107 06:25:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.107 06:25:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.107 06:25:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.107 06:25:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.107 06:25:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.107 06:25:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.107 06:25:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.107 06:25:32 -- accel/accel.sh@42 -- # jq -r . 00:07:40.107 [2024-10-04 06:25:32.532045] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:40.107 [2024-10-04 06:25:32.532705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:07:40.107 [2024-10-04 06:25:32.667953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.107 [2024-10-04 06:25:32.729666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.483 06:25:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.483 00:07:41.483 SPDK Configuration: 00:07:41.483 Core mask: 0x1 00:07:41.483 00:07:41.483 Accel Perf Configuration: 00:07:41.483 Workload Type: decompress 00:07:41.483 Transfer size: 111250 bytes 00:07:41.483 Vector count 1 00:07:41.483 Module: software 00:07:41.483 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.483 Queue depth: 32 00:07:41.483 Allocate depth: 32 00:07:41.483 # threads/core: 1 00:07:41.483 Run time: 1 seconds 00:07:41.483 Verify: Yes 00:07:41.483 00:07:41.483 Running for 1 seconds... 
00:07:41.483 00:07:41.483 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.483 ------------------------------------------------------------------------------------ 00:07:41.483 0,0 5728/s 236 MiB/s 0 0 00:07:41.483 ==================================================================================== 00:07:41.483 Total 5728/s 607 MiB/s 0 0' 00:07:41.483 06:25:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.483 06:25:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.483 06:25:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.483 06:25:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:41.483 06:25:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.483 06:25:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.483 06:25:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.483 06:25:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.483 06:25:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.483 06:25:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.483 06:25:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.483 06:25:33 -- accel/accel.sh@42 -- # jq -r . 00:07:41.483 [2024-10-04 06:25:34.013692] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:41.483 [2024-10-04 06:25:34.013787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70989 ] 00:07:41.483 [2024-10-04 06:25:34.148065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.741 [2024-10-04 06:25:34.206666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=0x1 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=decompress 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.741 06:25:34 -- accel/accel.sh@20 
-- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=software 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=32 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=32 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=1 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val=Yes 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:41.741 06:25:34 -- accel/accel.sh@21 -- # val= 00:07:41.741 06:25:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # IFS=: 00:07:41.741 06:25:34 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # 
val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@21 -- # val= 00:07:43.118 06:25:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # IFS=: 00:07:43.118 06:25:35 -- accel/accel.sh@20 -- # read -r var val 00:07:43.118 06:25:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.118 06:25:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.118 06:25:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.118 00:07:43.118 real 0m2.958s 00:07:43.118 user 0m2.485s 00:07:43.118 sys 0m0.265s 00:07:43.118 06:25:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.118 06:25:35 -- common/autotest_common.sh@10 -- # set +x 00:07:43.118 ************************************ 00:07:43.118 END TEST accel_decmop_full 00:07:43.118 ************************************ 00:07:43.118 06:25:35 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.118 06:25:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:43.118 06:25:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.118 06:25:35 -- common/autotest_common.sh@10 -- # set +x 00:07:43.118 ************************************ 00:07:43.118 START TEST accel_decomp_mcore 00:07:43.118 ************************************ 00:07:43.118 06:25:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.118 06:25:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.118 06:25:35 -- accel/accel.sh@17 -- # local accel_module 00:07:43.118 06:25:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.118 06:25:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.118 06:25:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.118 06:25:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.118 06:25:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.118 06:25:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.118 06:25:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.118 06:25:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.118 06:25:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.118 06:25:35 -- accel/accel.sh@42 -- # jq -r . 00:07:43.118 [2024-10-04 06:25:35.536687] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
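accel_decomp_mcore widens the core mask: -m 0xf selects cores 0-3, which is why the EAL init just below reports four available cores and four reactor start-up notices instead of one. Equivalent standalone invocation, same path assumptions as before:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf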
00:07:43.118 [2024-10-04 06:25:35.536764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71024 ] 00:07:43.118 [2024-10-04 06:25:35.665314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.118 [2024-10-04 06:25:35.730087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.118 [2024-10-04 06:25:35.730206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.118 [2024-10-04 06:25:35.730334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.118 [2024-10-04 06:25:35.730334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.546 06:25:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:44.546 00:07:44.546 SPDK Configuration: 00:07:44.546 Core mask: 0xf 00:07:44.546 00:07:44.546 Accel Perf Configuration: 00:07:44.546 Workload Type: decompress 00:07:44.546 Transfer size: 4096 bytes 00:07:44.546 Vector count 1 00:07:44.546 Module: software 00:07:44.546 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.546 Queue depth: 32 00:07:44.546 Allocate depth: 32 00:07:44.546 # threads/core: 1 00:07:44.546 Run time: 1 seconds 00:07:44.546 Verify: Yes 00:07:44.546 00:07:44.546 Running for 1 seconds... 00:07:44.546 00:07:44.546 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.546 ------------------------------------------------------------------------------------ 00:07:44.546 0,0 57184/s 105 MiB/s 0 0 00:07:44.546 3,0 54048/s 99 MiB/s 0 0 00:07:44.546 2,0 52224/s 96 MiB/s 0 0 00:07:44.546 1,0 55488/s 102 MiB/s 0 0 00:07:44.546 ==================================================================================== 00:07:44.546 Total 218944/s 855 MiB/s 0 0' 00:07:44.546 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.546 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.546 06:25:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.546 06:25:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:44.546 06:25:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.546 06:25:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.546 06:25:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.546 06:25:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.546 06:25:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.546 06:25:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.546 06:25:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.546 06:25:37 -- accel/accel.sh@42 -- # jq -r . 00:07:44.546 [2024-10-04 06:25:37.060772] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
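The -m 0xf core mask passed to accel_perf selects cores 0-3, which is why four reactors start and the results table carries four Core,Thread rows. A hedged illustration of how such a hex mask decodes into core numbers (plain bash, not SPDK's actual mask parser):

    mask=0xf
    for core in {0..31}; do
        # test bit <core>; mask 0xf prints cores 0, 1, 2 and 3
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"
        fi
    done
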
00:07:44.546 [2024-10-04 06:25:37.060878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:07:44.546 [2024-10-04 06:25:37.196430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.805 [2024-10-04 06:25:37.259737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.805 [2024-10-04 06:25:37.259858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.805 [2024-10-04 06:25:37.260006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.805 [2024-10-04 06:25:37.260007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=0xf 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=decompress 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=software 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 
00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=32 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=32 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=1 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.805 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.805 06:25:37 -- accel/accel.sh@21 -- # val=Yes 00:07:44.805 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.806 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.806 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:44.806 06:25:37 -- accel/accel.sh@21 -- # val= 00:07:44.806 06:25:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # IFS=: 00:07:44.806 06:25:37 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- 
accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@21 -- # val= 00:07:46.181 06:25:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # IFS=: 00:07:46.181 06:25:38 -- accel/accel.sh@20 -- # read -r var val 00:07:46.181 06:25:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.181 06:25:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.181 06:25:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.181 00:07:46.181 real 0m3.047s 00:07:46.181 user 0m9.626s 00:07:46.181 sys 0m0.309s 00:07:46.181 06:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.181 06:25:38 -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 ************************************ 00:07:46.181 END TEST accel_decomp_mcore 00:07:46.181 ************************************ 00:07:46.181 06:25:38 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.181 06:25:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:46.181 06:25:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.181 06:25:38 -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 ************************************ 00:07:46.181 START TEST accel_decomp_full_mcore 00:07:46.181 ************************************ 00:07:46.181 06:25:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.181 06:25:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.181 06:25:38 -- accel/accel.sh@17 -- # local accel_module 00:07:46.181 06:25:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.181 06:25:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.181 06:25:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.181 06:25:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.181 06:25:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.181 06:25:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.181 06:25:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.181 06:25:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.181 06:25:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.181 06:25:38 -- accel/accel.sh@42 -- # jq -r . 00:07:46.181 [2024-10-04 06:25:38.641111] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:46.181 [2024-10-04 06:25:38.641230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71084 ] 00:07:46.181 [2024-10-04 06:25:38.780046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.181 [2024-10-04 06:25:38.849395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.181 [2024-10-04 06:25:38.849534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.181 [2024-10-04 06:25:38.849668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.182 [2024-10-04 06:25:38.849672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.557 06:25:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:47.557 00:07:47.557 SPDK Configuration: 00:07:47.557 Core mask: 0xf 00:07:47.557 00:07:47.557 Accel Perf Configuration: 00:07:47.557 Workload Type: decompress 00:07:47.557 Transfer size: 111250 bytes 00:07:47.557 Vector count 1 00:07:47.557 Module: software 00:07:47.557 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.557 Queue depth: 32 00:07:47.557 Allocate depth: 32 00:07:47.557 # threads/core: 1 00:07:47.557 Run time: 1 seconds 00:07:47.557 Verify: Yes 00:07:47.557 00:07:47.557 Running for 1 seconds... 00:07:47.557 00:07:47.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.557 ------------------------------------------------------------------------------------ 00:07:47.557 0,0 5152/s 212 MiB/s 0 0 00:07:47.557 3,0 5120/s 211 MiB/s 0 0 00:07:47.557 2,0 4768/s 196 MiB/s 0 0 00:07:47.557 1,0 5184/s 214 MiB/s 0 0 00:07:47.557 ==================================================================================== 00:07:47.557 Total 20224/s 2145 MiB/s 0 0' 00:07:47.557 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.557 06:25:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.557 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.557 06:25:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.557 06:25:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.557 06:25:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.557 06:25:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.557 06:25:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.557 06:25:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.557 06:25:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.557 06:25:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.557 06:25:40 -- accel/accel.sh@42 -- # jq -r . 00:07:47.557 [2024-10-04 06:25:40.161713] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
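Every accel_perf invocation in this run reads its JSON configuration from file descriptor 62 (-c /dev/fd/62). A rough sketch of how that descriptor can be wired up with process substitution; build_accel_config and accel_json_cfg appear in the trace, but the body shown here is an assumption, not the actual accel.sh source:

    build_accel_config() {
        accel_json_cfg=()   # stays empty in this run: no crypto/DSA/IAA modules requested
        local IFS=,
        # join any accumulated JSON method objects and pretty-print the document
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    accel_perf -c <(build_accel_config) -t 1 -w decompress -l "$testdir/bib" -y -o 0 -m 0xf
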
00:07:47.557 [2024-10-04 06:25:40.161853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71106 ] 00:07:47.816 [2024-10-04 06:25:40.296919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.816 [2024-10-04 06:25:40.360980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.816 [2024-10-04 06:25:40.361100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.816 [2024-10-04 06:25:40.361250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.816 [2024-10-04 06:25:40.361253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=0xf 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=decompress 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=software 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 
00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=32 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=32 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val=1 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.816 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.816 06:25:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.816 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.817 06:25:40 -- accel/accel.sh@21 -- # val=Yes 00:07:47.817 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.817 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.817 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:47.817 06:25:40 -- accel/accel.sh@21 -- # val= 00:07:47.817 06:25:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # IFS=: 00:07:47.817 06:25:40 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- 
accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@21 -- # val= 00:07:49.192 06:25:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # IFS=: 00:07:49.192 06:25:41 -- accel/accel.sh@20 -- # read -r var val 00:07:49.192 06:25:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.192 06:25:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.192 06:25:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.192 00:07:49.192 real 0m3.078s 00:07:49.192 user 0m9.792s 00:07:49.192 sys 0m0.291s 00:07:49.192 06:25:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.192 06:25:41 -- common/autotest_common.sh@10 -- # set +x 00:07:49.192 ************************************ 00:07:49.192 END TEST accel_decomp_full_mcore 00:07:49.192 ************************************ 00:07:49.192 06:25:41 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.192 06:25:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:49.192 06:25:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.192 06:25:41 -- common/autotest_common.sh@10 -- # set +x 00:07:49.192 ************************************ 00:07:49.192 START TEST accel_decomp_mthread 00:07:49.192 ************************************ 00:07:49.192 06:25:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.192 06:25:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.192 06:25:41 -- accel/accel.sh@17 -- # local accel_module 00:07:49.192 06:25:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.192 06:25:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:49.192 06:25:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.192 06:25:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.192 06:25:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.192 06:25:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.192 06:25:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.192 06:25:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.192 06:25:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.192 06:25:41 -- accel/accel.sh@42 -- # jq -r . 00:07:49.192 [2024-10-04 06:25:41.766380] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:49.192 [2024-10-04 06:25:41.766476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71144 ] 00:07:49.451 [2024-10-04 06:25:41.888667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.451 [2024-10-04 06:25:41.963299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.826 06:25:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:50.826 00:07:50.826 SPDK Configuration: 00:07:50.826 Core mask: 0x1 00:07:50.826 00:07:50.826 Accel Perf Configuration: 00:07:50.826 Workload Type: decompress 00:07:50.826 Transfer size: 4096 bytes 00:07:50.826 Vector count 1 00:07:50.826 Module: software 00:07:50.826 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.826 Queue depth: 32 00:07:50.826 Allocate depth: 32 00:07:50.826 # threads/core: 2 00:07:50.826 Run time: 1 seconds 00:07:50.826 Verify: Yes 00:07:50.826 00:07:50.826 Running for 1 seconds... 00:07:50.826 00:07:50.826 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.826 ------------------------------------------------------------------------------------ 00:07:50.826 0,1 41856/s 77 MiB/s 0 0 00:07:50.826 0,0 41696/s 76 MiB/s 0 0 00:07:50.826 ==================================================================================== 00:07:50.826 Total 83552/s 326 MiB/s 0 0' 00:07:50.826 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:50.826 06:25:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.826 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:50.826 06:25:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:50.826 06:25:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.826 06:25:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.826 06:25:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.826 06:25:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.826 06:25:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.826 06:25:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.826 06:25:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.826 06:25:43 -- accel/accel.sh@42 -- # jq -r . 00:07:50.826 [2024-10-04 06:25:43.253252] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
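In the run above, -T 2 starts two worker threads on the single core selected by -c 0x1, reported as rows 0,0 and 0,1; the Total line sums their transfer rates (41856 + 41696 = 83552). A small awk sketch for re-deriving that sum from a saved copy of the table (perf_table.txt is a hypothetical file name):

    awk -F'[ /]+' '/^[0-9]+,[0-9]+/ { xfers += $2 }
                   END { printf "total: %d/s\n", xfers }' perf_table.txt
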
00:07:50.826 [2024-10-04 06:25:43.254033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71163 ] 00:07:50.826 [2024-10-04 06:25:43.396004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.826 [2024-10-04 06:25:43.456872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=0x1 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=decompress 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=software 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=32 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- 
accel/accel.sh@21 -- # val=32 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=2 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val=Yes 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:51.084 06:25:43 -- accel/accel.sh@21 -- # val= 00:07:51.084 06:25:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # IFS=: 00:07:51.084 06:25:43 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@21 -- # val= 00:07:52.460 06:25:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # IFS=: 00:07:52.460 06:25:44 -- accel/accel.sh@20 -- # read -r var val 00:07:52.460 06:25:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.460 06:25:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.460 06:25:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.460 00:07:52.460 real 0m3.013s 00:07:52.460 user 0m1.257s 00:07:52.460 sys 0m0.143s 00:07:52.460 06:25:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.460 06:25:44 -- common/autotest_common.sh@10 -- # set +x 00:07:52.460 ************************************ 00:07:52.460 END 
TEST accel_decomp_mthread 00:07:52.460 ************************************ 00:07:52.460 06:25:44 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.460 06:25:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:52.460 06:25:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.460 06:25:44 -- common/autotest_common.sh@10 -- # set +x 00:07:52.460 ************************************ 00:07:52.460 START TEST accel_decomp_full_mthread 00:07:52.460 ************************************ 00:07:52.461 06:25:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 06:25:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.461 06:25:44 -- accel/accel.sh@17 -- # local accel_module 00:07:52.461 06:25:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 06:25:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 06:25:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.461 06:25:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.461 06:25:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.461 06:25:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.461 06:25:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.461 06:25:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.461 06:25:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.461 06:25:44 -- accel/accel.sh@42 -- # jq -r . 00:07:52.461 [2024-10-04 06:25:44.832465] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:52.461 [2024-10-04 06:25:44.832552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71198 ] 00:07:52.461 [2024-10-04 06:25:44.953662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.461 [2024-10-04 06:25:45.017743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.837 06:25:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:53.837 00:07:53.837 SPDK Configuration: 00:07:53.837 Core mask: 0x1 00:07:53.837 00:07:53.837 Accel Perf Configuration: 00:07:53.837 Workload Type: decompress 00:07:53.837 Transfer size: 111250 bytes 00:07:53.837 Vector count 1 00:07:53.837 Module: software 00:07:53.837 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.837 Queue depth: 32 00:07:53.837 Allocate depth: 32 00:07:53.837 # threads/core: 2 00:07:53.837 Run time: 1 seconds 00:07:53.837 Verify: Yes 00:07:53.837 00:07:53.837 Running for 1 seconds... 
00:07:53.837 00:07:53.837 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.837 ------------------------------------------------------------------------------------ 00:07:53.837 0,1 2816/s 116 MiB/s 0 0 00:07:53.837 0,0 2784/s 115 MiB/s 0 0 00:07:53.837 ==================================================================================== 00:07:53.837 Total 5600/s 594 MiB/s 0 0' 00:07:53.837 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:53.837 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:53.837 06:25:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.837 06:25:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.837 06:25:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.837 06:25:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.837 06:25:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.837 06:25:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.837 06:25:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.837 06:25:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.837 06:25:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.837 06:25:46 -- accel/accel.sh@42 -- # jq -r . 00:07:53.837 [2024-10-04 06:25:46.367307] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:07:53.837 [2024-10-04 06:25:46.367439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71217 ] 00:07:53.837 [2024-10-04 06:25:46.503661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.094 [2024-10-04 06:25:46.569778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.094 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.094 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.094 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=0x1 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=decompress 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=software 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=32 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=32 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=2 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val=Yes 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:54.095 06:25:46 -- accel/accel.sh@21 -- # val= 00:07:54.095 06:25:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # IFS=: 00:07:54.095 06:25:46 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # 
read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@21 -- # val= 00:07:55.469 06:25:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # IFS=: 00:07:55.469 06:25:47 -- accel/accel.sh@20 -- # read -r var val 00:07:55.469 06:25:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.469 06:25:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.469 06:25:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.469 ************************************ 00:07:55.469 END TEST accel_decomp_full_mthread 00:07:55.469 ************************************ 00:07:55.469 00:07:55.469 real 0m3.049s 00:07:55.469 user 0m2.567s 00:07:55.469 sys 0m0.280s 00:07:55.469 06:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.469 06:25:47 -- common/autotest_common.sh@10 -- # set +x 00:07:55.469 06:25:47 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:55.469 06:25:47 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.469 06:25:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.469 06:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.469 06:25:47 -- common/autotest_common.sh@10 -- # set +x 00:07:55.469 06:25:47 -- accel/accel.sh@129 -- # build_accel_config 00:07:55.469 06:25:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.469 06:25:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.469 06:25:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.469 06:25:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.469 06:25:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.469 06:25:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.469 06:25:47 -- accel/accel.sh@42 -- # jq -r . 00:07:55.469 ************************************ 00:07:55.469 START TEST accel_dif_functional_tests 00:07:55.469 ************************************ 00:07:55.469 06:25:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.469 [2024-10-04 06:25:47.961133] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
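The accel_dif suite that follows exercises T10 DIF protection-information checks, so the *ERROR* lines interleaved with its output are expected: each negative test corrupts one tag on purpose and passes only if _dif_verify reports the mismatch. For reference, the standard 8-byte DIF field appended to each block breaks down as follows (a comment-only summary, not SPDK source):

    # bytes 0-1: Guard tag       - CRC16 over the block data
    #            ("Expected=5a5a, Actual=7867" below is a deliberately broken guard)
    # bytes 2-3: Application tag - opaque to the device, compared only when its check flag is set
    # bytes 4-7: Reference tag   - typically the low 32 bits of the LBA
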
00:07:55.469 [2024-10-04 06:25:47.961235] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71253 ] 00:07:55.469 [2024-10-04 06:25:48.084269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.727 [2024-10-04 06:25:48.151801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.727 [2024-10-04 06:25:48.151946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.727 [2024-10-04 06:25:48.151955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.727 00:07:55.727 00:07:55.727 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.727 http://cunit.sourceforge.net/ 00:07:55.727 00:07:55.727 00:07:55.727 Suite: accel_dif 00:07:55.727 Test: verify: DIF generated, GUARD check ...passed 00:07:55.727 Test: verify: DIF generated, APPTAG check ...passed 00:07:55.727 Test: verify: DIF generated, REFTAG check ...passed 00:07:55.727 Test: verify: DIF not generated, GUARD check ...passed 00:07:55.727 Test: verify: DIF not generated, APPTAG check ...[2024-10-04 06:25:48.273764] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.727 [2024-10-04 06:25:48.273982] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.727 [2024-10-04 06:25:48.274021] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.727 passed 00:07:55.727 Test: verify: DIF not generated, REFTAG check ...passed 00:07:55.727 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:55.727 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:55.727 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-10-04 06:25:48.274050] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.727 [2024-10-04 06:25:48.274077] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.727 [2024-10-04 06:25:48.274102] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.727 [2024-10-04 06:25:48.274156] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:55.727 passed 00:07:55.727 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:55.727 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:55.727 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:55.727 Test: generate copy: DIF generated, GUARD check ...[2024-10-04 06:25:48.274603] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:55.727 passed 00:07:55.727 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:55.727 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:55.727 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:55.727 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:55.727 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:55.727 Test: generate copy: iovecs-len validate ...[2024-10-04 06:25:48.275056] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:55.727 passed 00:07:55.727 Test: generate copy: buffer alignment validate ...passed 00:07:55.727 00:07:55.727 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.727 suites 1 1 n/a 0 0 00:07:55.727 tests 20 20 20 0 0 00:07:55.727 asserts 204 204 204 0 n/a 00:07:55.727 00:07:55.727 Elapsed time = 0.005 seconds 00:07:55.985 00:07:55.985 real 0m0.630s 00:07:55.985 user 0m0.936s 00:07:55.985 sys 0m0.191s 00:07:55.985 06:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.985 ************************************ 00:07:55.985 END TEST accel_dif_functional_tests 00:07:55.985 ************************************ 00:07:55.985 06:25:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.985 00:07:55.985 real 1m4.830s 00:07:55.985 user 1m8.628s 00:07:55.985 sys 0m7.256s 00:07:55.985 ************************************ 00:07:55.985 END TEST accel 00:07:55.985 ************************************ 00:07:55.985 06:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.985 06:25:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.985 06:25:48 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:55.985 06:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.985 06:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.985 06:25:48 -- common/autotest_common.sh@10 -- # set +x 00:07:55.985 ************************************ 00:07:55.985 START TEST accel_rpc 00:07:55.985 ************************************ 00:07:55.985 06:25:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:56.242 * Looking for test storage... 00:07:56.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:56.242 06:25:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.242 06:25:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71322 00:07:56.242 06:25:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 71322 00:07:56.242 06:25:48 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.242 06:25:48 -- common/autotest_common.sh@819 -- # '[' -z 71322 ']' 00:07:56.242 06:25:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.242 06:25:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.242 06:25:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.242 06:25:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.242 06:25:48 -- common/autotest_common.sh@10 -- # set +x 00:07:56.242 [2024-10-04 06:25:48.798521] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
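The accel_rpc suite starting here boots spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be changed before the accel framework initializes. Condensed from the trace that follows (rpc_cmd is the autotest wrapper around scripts/rpc.py), the sequence under test is:

    rpc_cmd accel_assign_opc -o copy -m incorrect   # a bogus module name is accepted pre-init
    rpc_cmd accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
    rpc_cmd framework_start_init                    # initialization resolves the assignment
    rpc_cmd accel_get_opc_assignments | jq -r .copy | grep software   # expect: software
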
00:07:56.242 [2024-10-04 06:25:48.798915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71322 ] 00:07:56.500 [2024-10-04 06:25:48.936079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.500 [2024-10-04 06:25:49.002259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.500 [2024-10-04 06:25:49.002738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.500 06:25:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.500 06:25:49 -- common/autotest_common.sh@852 -- # return 0 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:56.500 06:25:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.500 06:25:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.500 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.500 ************************************ 00:07:56.500 START TEST accel_assign_opcode 00:07:56.500 ************************************ 00:07:56.500 06:25:49 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:56.500 06:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.500 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.500 [2024-10-04 06:25:49.067390] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:56.500 06:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:56.500 06:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.500 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.500 [2024-10-04 06:25:49.079396] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:56.500 06:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.500 06:25:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:56.500 06:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.500 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.759 06:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.759 06:25:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:56.759 06:25:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:56.759 06:25:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.759 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.759 06:25:49 -- accel/accel_rpc.sh@42 -- # grep software 00:07:56.759 06:25:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.759 software 00:07:56.759 ************************************ 00:07:56.759 END TEST accel_assign_opcode 00:07:56.759 ************************************ 00:07:56.759 00:07:56.759 real 0m0.379s 00:07:56.759 user 0m0.059s 00:07:56.759 sys 0m0.010s 00:07:56.759 06:25:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.759 06:25:49 -- common/autotest_common.sh@10 -- # set +x 00:07:57.017 06:25:49 -- accel/accel_rpc.sh@55 -- # killprocess 71322 00:07:57.017 06:25:49 -- common/autotest_common.sh@926 -- # '[' -z 71322 ']' 00:07:57.017 06:25:49 -- common/autotest_common.sh@930 -- # kill -0 71322 00:07:57.017 06:25:49 -- common/autotest_common.sh@931 -- # uname 00:07:57.017 06:25:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.017 06:25:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71322 00:07:57.017 killing process with pid 71322 00:07:57.017 06:25:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.017 06:25:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.017 06:25:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71322' 00:07:57.017 06:25:49 -- common/autotest_common.sh@945 -- # kill 71322 00:07:57.017 06:25:49 -- common/autotest_common.sh@950 -- # wait 71322 00:07:57.584 00:07:57.584 real 0m1.419s 00:07:57.584 user 0m1.263s 00:07:57.584 sys 0m0.502s 00:07:57.584 ************************************ 00:07:57.584 END TEST accel_rpc 00:07:57.584 ************************************ 00:07:57.584 06:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.584 06:25:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.584 06:25:50 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:57.584 06:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.584 06:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.584 06:25:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.584 ************************************ 00:07:57.584 START TEST app_cmdline 00:07:57.584 ************************************ 00:07:57.584 06:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:57.584 * Looking for test storage... 00:07:57.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:57.584 06:25:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:57.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.584 06:25:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71413 00:07:57.584 06:25:50 -- app/cmdline.sh@18 -- # waitforlisten 71413 00:07:57.584 06:25:50 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:57.584 06:25:50 -- common/autotest_common.sh@819 -- # '[' -z 71413 ']' 00:07:57.584 06:25:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.584 06:25:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.584 06:25:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.584 06:25:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.584 06:25:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.584 [2024-10-04 06:25:50.260517] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
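The accel_rpc pass above reduces to a short RPC sequence; a minimal sketch of the same steps, assuming rpc.py talks to the default /var/tmp/spdk.sock socket:

    # start the target paused so opcode assignments land before module init
    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py accel_assign_opc -o copy -m software   # bind the copy opcode to the software module
    scripts/rpc.py framework_start_init                   # finish subsystem initialization
    # verify the assignment stuck
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software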
00:07:57.584 [2024-10-04 06:25:50.260837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71413 ] 00:07:57.855 [2024-10-04 06:25:50.390760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.855 [2024-10-04 06:25:50.465007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.855 [2024-10-04 06:25:50.465537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.807 06:25:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.807 06:25:51 -- common/autotest_common.sh@852 -- # return 0 00:07:58.807 06:25:51 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:59.066 { 00:07:59.066 "fields": { 00:07:59.066 "commit": "726a04d70", 00:07:59.066 "major": 24, 00:07:59.066 "minor": 1, 00:07:59.066 "patch": 1, 00:07:59.066 "suffix": "-pre" 00:07:59.066 }, 00:07:59.066 "version": "SPDK v24.01.1-pre git sha1 726a04d70" 00:07:59.066 } 00:07:59.066 06:25:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.066 06:25:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.066 06:25:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.066 06:25:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.066 06:25:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.066 06:25:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.066 06:25:51 -- app/cmdline.sh@26 -- # sort 00:07:59.066 06:25:51 -- common/autotest_common.sh@10 -- # set +x 00:07:59.066 06:25:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.066 06:25:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.066 06:25:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.066 06:25:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.066 06:25:51 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.066 06:25:51 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.066 06:25:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.066 06:25:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.066 06:25:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.066 06:25:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.066 06:25:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.066 06:25:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.066 06:25:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.066 06:25:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.066 06:25:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.066 06:25:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.325 2024/10/04 06:25:51 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:59.325 request: 00:07:59.325 { 00:07:59.325 "method": "env_dpdk_get_mem_stats", 00:07:59.325 "params": {} 00:07:59.325 } 00:07:59.325 Got JSON-RPC error response 00:07:59.325 GoRPCClient: error on JSON-RPC call 00:07:59.325 06:25:51 -- common/autotest_common.sh@643 -- # es=1 00:07:59.325 06:25:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.325 06:25:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:59.325 06:25:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.325 06:25:51 -- app/cmdline.sh@1 -- # killprocess 71413 00:07:59.325 06:25:51 -- common/autotest_common.sh@926 -- # '[' -z 71413 ']' 00:07:59.325 06:25:51 -- common/autotest_common.sh@930 -- # kill -0 71413 00:07:59.325 06:25:51 -- common/autotest_common.sh@931 -- # uname 00:07:59.325 06:25:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:59.325 06:25:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71413 00:07:59.325 06:25:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:59.325 06:25:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:59.325 06:25:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71413' 00:07:59.325 killing process with pid 71413 00:07:59.325 06:25:51 -- common/autotest_common.sh@945 -- # kill 71413 00:07:59.325 06:25:51 -- common/autotest_common.sh@950 -- # wait 71413 00:07:59.892 00:07:59.892 real 0m2.256s 00:07:59.892 user 0m2.676s 00:07:59.892 sys 0m0.577s 00:07:59.892 06:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.892 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:07:59.892 ************************************ 00:07:59.892 END TEST app_cmdline 00:07:59.892 ************************************ 00:07:59.892 06:25:52 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:59.892 06:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.892 06:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.892 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:07:59.893 ************************************ 00:07:59.893 START TEST version 00:07:59.893 ************************************ 00:07:59.893 06:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:59.893 * Looking for test storage... 
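The Code=-32601 failure traced above is the point of the app_cmdline test: --rpcs-allowed whitelists exactly two methods and everything else is rejected. A condensed sketch of the same check:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version                        # allowed; returns the version JSON
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # lists only the two whitelisted methods
    scripts/rpc.py env_dpdk_get_mem_stats                  # rejected: Code=-32601 Msg=Method not found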
00:07:59.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.893 06:25:52 -- app/version.sh@17 -- # get_header_version major 00:07:59.893 06:25:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.893 06:25:52 -- app/version.sh@14 -- # cut -f2 00:07:59.893 06:25:52 -- app/version.sh@14 -- # tr -d '"' 00:07:59.893 06:25:52 -- app/version.sh@17 -- # major=24 00:07:59.893 06:25:52 -- app/version.sh@18 -- # get_header_version minor 00:07:59.893 06:25:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.893 06:25:52 -- app/version.sh@14 -- # cut -f2 00:07:59.893 06:25:52 -- app/version.sh@14 -- # tr -d '"' 00:07:59.893 06:25:52 -- app/version.sh@18 -- # minor=1 00:07:59.893 06:25:52 -- app/version.sh@19 -- # get_header_version patch 00:07:59.893 06:25:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.893 06:25:52 -- app/version.sh@14 -- # cut -f2 00:07:59.893 06:25:52 -- app/version.sh@14 -- # tr -d '"' 00:07:59.893 06:25:52 -- app/version.sh@19 -- # patch=1 00:07:59.893 06:25:52 -- app/version.sh@20 -- # get_header_version suffix 00:07:59.893 06:25:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:59.893 06:25:52 -- app/version.sh@14 -- # cut -f2 00:07:59.893 06:25:52 -- app/version.sh@14 -- # tr -d '"' 00:07:59.893 06:25:52 -- app/version.sh@20 -- # suffix=-pre 00:07:59.893 06:25:52 -- app/version.sh@22 -- # version=24.1 00:07:59.893 06:25:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:59.893 06:25:52 -- app/version.sh@25 -- # version=24.1.1 00:07:59.893 06:25:52 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:59.893 06:25:52 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:59.893 06:25:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.152 06:25:52 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:00.152 06:25:52 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:00.152 ************************************ 00:08:00.152 END TEST version 00:08:00.152 ************************************ 00:08:00.152 00:08:00.152 real 0m0.159s 00:08:00.152 user 0m0.090s 00:08:00.152 sys 0m0.103s 00:08:00.152 06:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.152 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.152 06:25:52 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@204 -- # uname -s 00:08:00.152 06:25:52 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:00.152 06:25:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:00.152 06:25:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:00.152 06:25:52 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:00.152 06:25:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:00.152 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.152 06:25:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:00.152 06:25:52 -- 
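Each component in the version test above comes from the same grep/cut/tr pipeline over include/spdk/version.h; a sketch, assuming the #define lines are tab-separated so the bare cut -f2 works:

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major.$minor.$patch$suffix"   # 24.1.1-pre in this run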
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:00.152 06:25:52 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:08:00.152 06:25:52 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.152 06:25:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.152 06:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.152 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.152 ************************************ 00:08:00.152 START TEST nvmf_tcp 00:08:00.152 ************************************ 00:08:00.152 06:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.152 * Looking for test storage... 00:08:00.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:00.152 06:25:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:00.152 06:25:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:00.152 06:25:52 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.152 06:25:52 -- nvmf/common.sh@7 -- # uname -s 00:08:00.152 06:25:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.152 06:25:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.152 06:25:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.152 06:25:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.152 06:25:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.152 06:25:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.152 06:25:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.152 06:25:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.152 06:25:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.152 06:25:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.152 06:25:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:00.152 06:25:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:00.152 06:25:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.152 06:25:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.152 06:25:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.152 06:25:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.152 06:25:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.152 06:25:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.153 06:25:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.153 06:25:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.153 06:25:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.153 06:25:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.412 06:25:52 -- paths/export.sh@5 -- # export PATH 00:08:00.412 06:25:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.412 06:25:52 -- nvmf/common.sh@46 -- # : 0 00:08:00.412 06:25:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.412 06:25:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.412 06:25:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.412 06:25:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.412 06:25:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.412 06:25:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.412 06:25:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.412 06:25:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.412 06:25:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:00.412 06:25:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:00.412 06:25:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:00.412 06:25:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.412 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.412 06:25:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:00.412 06:25:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.412 06:25:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:00.412 06:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.412 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.412 ************************************ 00:08:00.412 START TEST nvmf_example 00:08:00.412 ************************************ 00:08:00.412 06:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.412 * Looking for test storage... 
00:08:00.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:00.412 06:25:52 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.412 06:25:52 -- nvmf/common.sh@7 -- # uname -s 00:08:00.412 06:25:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.412 06:25:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.412 06:25:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.412 06:25:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.413 06:25:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.413 06:25:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.413 06:25:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.413 06:25:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.413 06:25:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.413 06:25:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.413 06:25:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:00.413 06:25:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:00.413 06:25:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.413 06:25:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.413 06:25:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.413 06:25:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.413 06:25:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.413 06:25:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.413 06:25:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.413 06:25:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.413 06:25:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.413 06:25:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.413 06:25:52 -- 
paths/export.sh@5 -- # export PATH 00:08:00.413 06:25:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.413 06:25:52 -- nvmf/common.sh@46 -- # : 0 00:08:00.413 06:25:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:00.413 06:25:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:00.413 06:25:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:00.413 06:25:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.413 06:25:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.413 06:25:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:00.413 06:25:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:00.413 06:25:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:00.413 06:25:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:00.413 06:25:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:00.413 06:25:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:00.413 06:25:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:00.413 06:25:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:00.413 06:25:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:00.413 06:25:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:00.413 06:25:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:00.413 06:25:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.413 06:25:52 -- common/autotest_common.sh@10 -- # set +x 00:08:00.413 06:25:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:00.413 06:25:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:00.413 06:25:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.413 06:25:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:00.413 06:25:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:00.413 06:25:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:00.413 06:25:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.413 06:25:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.413 06:25:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.413 06:25:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:00.413 06:25:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:00.413 06:25:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:00.413 06:25:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:00.413 06:25:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:00.413 06:25:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:00.413 06:25:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.413 06:25:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.413 06:25:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:00.413 06:25:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:00.413 06:25:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:00.413 06:25:52 
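The repeated directories in the PATH echoes above come from paths/export.sh being re-sourced by every nested test script; presumably it just prepends each toolchain directory unconditionally, roughly:

    # sketch of /etc/opt/spdk-pkgdep/paths/export.sh (assumed, inferred from the trace)
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH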
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:00.413 06:25:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:00.413 06:25:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.413 06:25:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:00.413 06:25:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:00.413 06:25:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:00.413 06:25:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:00.413 06:25:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:00.413 Cannot find device "nvmf_init_br" 00:08:00.413 06:25:52 -- nvmf/common.sh@153 -- # true 00:08:00.413 06:25:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:00.413 Cannot find device "nvmf_tgt_br" 00:08:00.413 06:25:53 -- nvmf/common.sh@154 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:00.413 Cannot find device "nvmf_tgt_br2" 00:08:00.413 06:25:53 -- nvmf/common.sh@155 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:00.413 Cannot find device "nvmf_init_br" 00:08:00.413 06:25:53 -- nvmf/common.sh@156 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:00.413 Cannot find device "nvmf_tgt_br" 00:08:00.413 06:25:53 -- nvmf/common.sh@157 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:00.413 Cannot find device "nvmf_tgt_br2" 00:08:00.413 06:25:53 -- nvmf/common.sh@158 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:00.413 Cannot find device "nvmf_br" 00:08:00.413 06:25:53 -- nvmf/common.sh@159 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:00.413 Cannot find device "nvmf_init_if" 00:08:00.413 06:25:53 -- nvmf/common.sh@160 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:00.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.413 06:25:53 -- nvmf/common.sh@161 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:00.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:00.413 06:25:53 -- nvmf/common.sh@162 -- # true 00:08:00.413 06:25:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:00.672 06:25:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:00.672 06:25:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:00.672 06:25:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:00.672 06:25:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:00.672 06:25:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:00.672 06:25:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:00.672 06:25:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:00.672 06:25:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:00.672 06:25:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:00.672 
06:25:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:00.672 06:25:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:00.672 06:25:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:00.672 06:25:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:00.672 06:25:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:00.672 06:25:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:00.672 06:25:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:00.672 06:25:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:00.672 06:25:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:00.672 06:25:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:00.672 06:25:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:00.931 06:25:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:00.931 06:25:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:00.931 06:25:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:00.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:08:00.931 00:08:00.931 --- 10.0.0.2 ping statistics --- 00:08:00.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.931 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:00.931 06:25:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:00.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:00.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:00.931 00:08:00.931 --- 10.0.0.3 ping statistics --- 00:08:00.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.931 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:00.931 06:25:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:00.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:00.931 00:08:00.931 --- 10.0.0.1 ping statistics --- 00:08:00.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.931 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:00.931 06:25:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.931 06:25:53 -- nvmf/common.sh@421 -- # return 0 00:08:00.931 06:25:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:00.931 06:25:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.931 06:25:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:00.931 06:25:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:00.931 06:25:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.931 06:25:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:00.931 06:25:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:00.931 06:25:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:00.931 06:25:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:00.931 06:25:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.931 06:25:53 -- common/autotest_common.sh@10 -- # set +x 00:08:00.931 06:25:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:00.931 06:25:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:00.931 06:25:53 -- target/nvmf_example.sh@34 -- # nvmfpid=71768 00:08:00.931 06:25:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.931 06:25:53 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:00.931 06:25:53 -- target/nvmf_example.sh@36 -- # waitforlisten 71768 00:08:00.931 06:25:53 -- common/autotest_common.sh@819 -- # '[' -z 71768 ']' 00:08:00.931 06:25:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.931 06:25:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:00.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.931 06:25:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
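The interface plumbing traced above boils down to one initiator-side veth pair plus target-side pairs in a namespace, all tied to one bridge; a condensed sketch using the same names as the nvmf_veth_init trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP port 4420 through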
00:08:00.932 06:25:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:00.932 06:25:53 -- common/autotest_common.sh@10 -- # set +x 00:08:01.867 06:25:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:01.867 06:25:54 -- common/autotest_common.sh@852 -- # return 0 00:08:01.867 06:25:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:01.867 06:25:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:01.867 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.126 06:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.126 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.126 06:25:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.126 06:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.126 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.126 06:25:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:02.126 06:25:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.126 06:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.126 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.126 06:25:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:02.126 06:25:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.126 06:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.126 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.126 06:25:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.126 06:25:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.126 06:25:54 -- common/autotest_common.sh@10 -- # set +x 00:08:02.126 06:25:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.126 06:25:54 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:02.126 06:25:54 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:14.334 Initializing NVMe Controllers 00:08:14.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.334 Initialization complete. Launching workers. 
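The workload just launched is driven by a handful of plain RPCs plus one spdk_nvme_perf invocation, all visible in the trace above; the results table follows below. A sketch of the sequence, with the transport ID string spelled out:

    # target side: TCP transport, one 64 MiB malloc namespace, listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: queue depth 64, 4 KiB random mixed I/O (-M 30) for 10 s over NVMe/TCP
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'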
00:08:14.334 ======================================================== 00:08:14.334 Latency(us) 00:08:14.334 Device Information : IOPS MiB/s Average min max 00:08:14.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15925.71 62.21 4019.65 648.12 45124.39 00:08:14.334 ======================================================== 00:08:14.334 Total : 15925.71 62.21 4019.65 648.12 45124.39 00:08:14.334 00:08:14.334 06:26:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:14.334 06:26:04 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:14.334 06:26:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:14.334 06:26:04 -- nvmf/common.sh@116 -- # sync 00:08:14.334 06:26:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.334 06:26:04 -- nvmf/common.sh@119 -- # set +e 00:08:14.334 06:26:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.334 06:26:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.334 rmmod nvme_tcp 00:08:14.334 rmmod nvme_fabrics 00:08:14.334 rmmod nvme_keyring 00:08:14.334 06:26:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.334 06:26:05 -- nvmf/common.sh@123 -- # set -e 00:08:14.334 06:26:05 -- nvmf/common.sh@124 -- # return 0 00:08:14.334 06:26:05 -- nvmf/common.sh@477 -- # '[' -n 71768 ']' 00:08:14.334 06:26:05 -- nvmf/common.sh@478 -- # killprocess 71768 00:08:14.334 06:26:05 -- common/autotest_common.sh@926 -- # '[' -z 71768 ']' 00:08:14.334 06:26:05 -- common/autotest_common.sh@930 -- # kill -0 71768 00:08:14.334 06:26:05 -- common/autotest_common.sh@931 -- # uname 00:08:14.334 06:26:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:14.334 06:26:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71768 00:08:14.334 06:26:05 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:14.334 06:26:05 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:14.334 killing process with pid 71768 00:08:14.334 06:26:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71768' 00:08:14.334 06:26:05 -- common/autotest_common.sh@945 -- # kill 71768 00:08:14.334 06:26:05 -- common/autotest_common.sh@950 -- # wait 71768 00:08:14.334 nvmf threads initialize successfully 00:08:14.334 bdev subsystem init successfully 00:08:14.334 created a nvmf target service 00:08:14.334 create targets's poll groups done 00:08:14.334 all subsystems of target started 00:08:14.334 nvmf target is running 00:08:14.334 all subsystems of target stopped 00:08:14.334 destroy targets's poll groups done 00:08:14.334 destroyed the nvmf target service 00:08:14.334 bdev subsystem finish successfully 00:08:14.334 nvmf threads destroy successfully 00:08:14.334 06:26:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:14.334 06:26:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:14.334 06:26:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:14.334 06:26:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.334 06:26:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:14.334 06:26:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.334 06:26:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.334 06:26:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.334 06:26:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:14.334 06:26:05 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:14.334 06:26:05 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:14.334 06:26:05 -- common/autotest_common.sh@10 -- # set +x 00:08:14.334 00:08:14.334 real 0m12.518s 00:08:14.334 user 0m44.985s 00:08:14.334 sys 0m1.932s 00:08:14.334 06:26:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.334 06:26:05 -- common/autotest_common.sh@10 -- # set +x 00:08:14.334 ************************************ 00:08:14.334 END TEST nvmf_example 00:08:14.334 ************************************ 00:08:14.334 06:26:05 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:14.334 06:26:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:14.335 06:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.335 06:26:05 -- common/autotest_common.sh@10 -- # set +x 00:08:14.335 ************************************ 00:08:14.335 START TEST nvmf_filesystem 00:08:14.335 ************************************ 00:08:14.335 06:26:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:14.335 * Looking for test storage... 00:08:14.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.335 06:26:05 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:14.335 06:26:05 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:14.335 06:26:05 -- common/autotest_common.sh@34 -- # set -e 00:08:14.335 06:26:05 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:14.335 06:26:05 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:14.335 06:26:05 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:14.335 06:26:05 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:14.335 06:26:05 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:14.335 06:26:05 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:14.335 06:26:05 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:14.335 06:26:05 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:14.335 06:26:05 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:14.335 06:26:05 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:14.335 06:26:05 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:14.335 06:26:05 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:14.335 06:26:05 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:14.335 06:26:05 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:14.335 06:26:05 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:14.335 06:26:05 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:14.335 06:26:05 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:14.335 06:26:05 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:14.335 06:26:05 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:14.335 06:26:05 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:14.335 06:26:05 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:14.335 06:26:05 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:14.335 06:26:05 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:14.335 06:26:05 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:14.335 06:26:05 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:14.335 06:26:05 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:08:14.335 06:26:05 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:14.335 06:26:05 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:14.335 06:26:05 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:14.335 06:26:05 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:14.335 06:26:05 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:14.335 06:26:05 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:14.335 06:26:05 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:14.335 06:26:05 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:14.335 06:26:05 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:14.335 06:26:05 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:14.335 06:26:05 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:14.335 06:26:05 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:14.335 06:26:05 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:14.335 06:26:05 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:14.335 06:26:05 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:14.335 06:26:05 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:14.335 06:26:05 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:14.335 06:26:05 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:14.335 06:26:05 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:14.335 06:26:05 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:14.335 06:26:05 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:14.335 06:26:05 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:14.335 06:26:05 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:14.335 06:26:05 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:14.335 06:26:05 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:14.335 06:26:05 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:14.335 06:26:05 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:14.335 06:26:05 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:14.335 06:26:05 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:14.335 06:26:05 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:14.335 06:26:05 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:14.335 06:26:05 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:14.335 06:26:05 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:14.335 06:26:05 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:14.335 06:26:05 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:14.335 06:26:05 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:14.335 06:26:05 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:14.335 06:26:05 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:14.335 06:26:05 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.335 06:26:05 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:14.335 06:26:05 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:14.335 06:26:05 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:14.335 06:26:05 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:14.335 06:26:05 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:14.335 06:26:05 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:14.335 06:26:05 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:08:14.335 06:26:05 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:14.335 06:26:05 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:14.335 06:26:05 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:14.335 06:26:05 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:14.335 06:26:05 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:14.335 06:26:05 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:14.335 06:26:05 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:14.335 06:26:05 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:14.335 06:26:05 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:14.335 06:26:05 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:14.335 06:26:05 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:14.335 06:26:05 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:14.335 06:26:05 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:14.335 06:26:05 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:14.335 06:26:05 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:14.335 06:26:05 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:14.335 06:26:05 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.335 06:26:05 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:14.335 06:26:05 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.335 06:26:05 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:14.335 06:26:05 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:14.335 06:26:05 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:14.335 06:26:05 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:14.335 06:26:05 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:14.335 06:26:05 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:14.335 06:26:05 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:14.335 06:26:05 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:14.335 #define SPDK_CONFIG_H 00:08:14.335 #define SPDK_CONFIG_APPS 1 00:08:14.335 #define SPDK_CONFIG_ARCH native 00:08:14.335 #undef SPDK_CONFIG_ASAN 00:08:14.335 #define SPDK_CONFIG_AVAHI 1 00:08:14.335 #undef SPDK_CONFIG_CET 00:08:14.335 #define SPDK_CONFIG_COVERAGE 1 00:08:14.335 #define SPDK_CONFIG_CROSS_PREFIX 00:08:14.335 #undef SPDK_CONFIG_CRYPTO 00:08:14.335 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:14.335 #undef SPDK_CONFIG_CUSTOMOCF 00:08:14.335 #undef SPDK_CONFIG_DAOS 00:08:14.335 #define SPDK_CONFIG_DAOS_DIR 00:08:14.335 #define SPDK_CONFIG_DEBUG 1 00:08:14.335 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:14.335 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:14.335 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:14.335 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.335 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:14.335 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:14.335 #define SPDK_CONFIG_EXAMPLES 1 00:08:14.335 #undef SPDK_CONFIG_FC 00:08:14.335 #define 
SPDK_CONFIG_FC_PATH 00:08:14.335 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:14.335 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:14.335 #undef SPDK_CONFIG_FUSE 00:08:14.335 #undef SPDK_CONFIG_FUZZER 00:08:14.335 #define SPDK_CONFIG_FUZZER_LIB 00:08:14.335 #define SPDK_CONFIG_GOLANG 1 00:08:14.335 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:14.335 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:14.335 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:14.335 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:14.335 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:14.335 #define SPDK_CONFIG_IDXD 1 00:08:14.335 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:14.335 #undef SPDK_CONFIG_IPSEC_MB 00:08:14.335 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:14.335 #define SPDK_CONFIG_ISAL 1 00:08:14.335 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:14.335 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:14.335 #define SPDK_CONFIG_LIBDIR 00:08:14.335 #undef SPDK_CONFIG_LTO 00:08:14.335 #define SPDK_CONFIG_MAX_LCORES 00:08:14.335 #define SPDK_CONFIG_NVME_CUSE 1 00:08:14.335 #undef SPDK_CONFIG_OCF 00:08:14.335 #define SPDK_CONFIG_OCF_PATH 00:08:14.335 #define SPDK_CONFIG_OPENSSL_PATH 00:08:14.335 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:14.335 #undef SPDK_CONFIG_PGO_USE 00:08:14.335 #define SPDK_CONFIG_PREFIX /usr/local 00:08:14.335 #undef SPDK_CONFIG_RAID5F 00:08:14.335 #undef SPDK_CONFIG_RBD 00:08:14.335 #define SPDK_CONFIG_RDMA 1 00:08:14.335 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:14.335 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:14.335 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:14.336 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:14.336 #define SPDK_CONFIG_SHARED 1 00:08:14.336 #undef SPDK_CONFIG_SMA 00:08:14.336 #define SPDK_CONFIG_TESTS 1 00:08:14.336 #undef SPDK_CONFIG_TSAN 00:08:14.336 #define SPDK_CONFIG_UBLK 1 00:08:14.336 #define SPDK_CONFIG_UBSAN 1 00:08:14.336 #undef SPDK_CONFIG_UNIT_TESTS 00:08:14.336 #undef SPDK_CONFIG_URING 00:08:14.336 #define SPDK_CONFIG_URING_PATH 00:08:14.336 #undef SPDK_CONFIG_URING_ZNS 00:08:14.336 #define SPDK_CONFIG_USDT 1 00:08:14.336 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:14.336 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:14.336 #undef SPDK_CONFIG_VFIO_USER 00:08:14.336 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:14.336 #define SPDK_CONFIG_VHOST 1 00:08:14.336 #define SPDK_CONFIG_VIRTIO 1 00:08:14.336 #undef SPDK_CONFIG_VTUNE 00:08:14.336 #define SPDK_CONFIG_VTUNE_DIR 00:08:14.336 #define SPDK_CONFIG_WERROR 1 00:08:14.336 #define SPDK_CONFIG_WPDK_DIR 00:08:14.336 #undef SPDK_CONFIG_XNVME 00:08:14.336 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:14.336 06:26:05 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:14.336 06:26:05 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.336 06:26:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.336 06:26:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.336 06:26:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.336 06:26:05 -- paths/export.sh@2 -- # 
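The full config.h dump above exists only to feed one glob match in applications.sh; a sketch of the check, assuming the header lives at include/spdk/config.h:

    # SPDK_AUTOTEST_DEBUG_APPS is only honored on a debug build
    if [[ $(< include/spdk/config.h) == *'#define SPDK_CONFIG_DEBUG'* ]]; then
        : # debug build: extra app debug knobs may apply
    fi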
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.336 06:26:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.336 06:26:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.336 06:26:05 -- paths/export.sh@5 -- # export PATH 00:08:14.336 06:26:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.336 06:26:05 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:14.336 06:26:05 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:14.336 06:26:05 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:14.336 06:26:05 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:14.336 06:26:05 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:14.336 06:26:05 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:14.336 06:26:05 -- pm/common@16 -- # TEST_TAG=N/A 00:08:14.336 06:26:05 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:14.336 06:26:05 -- common/autotest_common.sh@52 -- # : 1 00:08:14.336 06:26:05 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:14.336 06:26:05 -- common/autotest_common.sh@56 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:14.336 06:26:05 -- common/autotest_common.sh@58 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:14.336 06:26:05 -- 
common/autotest_common.sh@60 -- # : 1 00:08:14.336 06:26:05 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:14.336 06:26:05 -- common/autotest_common.sh@62 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:14.336 06:26:05 -- common/autotest_common.sh@64 -- # : 00:08:14.336 06:26:05 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:14.336 06:26:05 -- common/autotest_common.sh@66 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:14.336 06:26:05 -- common/autotest_common.sh@68 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:14.336 06:26:05 -- common/autotest_common.sh@70 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:14.336 06:26:05 -- common/autotest_common.sh@72 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:14.336 06:26:05 -- common/autotest_common.sh@74 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:14.336 06:26:05 -- common/autotest_common.sh@76 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:14.336 06:26:05 -- common/autotest_common.sh@78 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:14.336 06:26:05 -- common/autotest_common.sh@80 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:14.336 06:26:05 -- common/autotest_common.sh@82 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:14.336 06:26:05 -- common/autotest_common.sh@84 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:14.336 06:26:05 -- common/autotest_common.sh@86 -- # : 1 00:08:14.336 06:26:05 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:14.336 06:26:05 -- common/autotest_common.sh@88 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:14.336 06:26:05 -- common/autotest_common.sh@90 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:14.336 06:26:05 -- common/autotest_common.sh@92 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:14.336 06:26:05 -- common/autotest_common.sh@94 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:14.336 06:26:05 -- common/autotest_common.sh@96 -- # : tcp 00:08:14.336 06:26:05 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:14.336 06:26:05 -- common/autotest_common.sh@98 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:14.336 06:26:05 -- common/autotest_common.sh@100 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:14.336 06:26:05 -- common/autotest_common.sh@102 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:14.336 06:26:05 -- common/autotest_common.sh@104 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:14.336 06:26:05 -- common/autotest_common.sh@106 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:14.336 
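The long run of paired ': <value>' / 'export SPDK_TEST_*' trace lines around this point is produced by the standard shell parameter-default idiom in autotest_common.sh. Roughly (variable names taken from the trace, defaults illustrative):

    : "${SPDK_TEST_NVMF:=0}"              # traced as ': 0' (or ': 1' when the CI job presets it)
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as ': tcp'
    export SPDK_TEST_NVMF_TRANSPORT

The no-op ':' command forces evaluation of the ${VAR:=default} expansion, so every flag gets a default without clobbering a value the job already exported.
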
06:26:05 -- common/autotest_common.sh@108 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:14.336 06:26:05 -- common/autotest_common.sh@110 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:14.336 06:26:05 -- common/autotest_common.sh@112 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:14.336 06:26:05 -- common/autotest_common.sh@114 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:14.336 06:26:05 -- common/autotest_common.sh@116 -- # : 1 00:08:14.336 06:26:05 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:14.336 06:26:05 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:14.336 06:26:05 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:14.336 06:26:05 -- common/autotest_common.sh@120 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:14.336 06:26:05 -- common/autotest_common.sh@122 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:14.336 06:26:05 -- common/autotest_common.sh@124 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:14.336 06:26:05 -- common/autotest_common.sh@126 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:14.336 06:26:05 -- common/autotest_common.sh@128 -- # : 0 00:08:14.336 06:26:05 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:14.336 06:26:05 -- common/autotest_common.sh@130 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:14.337 06:26:05 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:14.337 06:26:05 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:14.337 06:26:05 -- common/autotest_common.sh@134 -- # : true 00:08:14.337 06:26:05 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:14.337 06:26:05 -- common/autotest_common.sh@136 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:14.337 06:26:05 -- common/autotest_common.sh@138 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:14.337 06:26:05 -- common/autotest_common.sh@140 -- # : 1 00:08:14.337 06:26:05 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:14.337 06:26:05 -- common/autotest_common.sh@142 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:14.337 06:26:05 -- common/autotest_common.sh@144 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:14.337 06:26:05 -- common/autotest_common.sh@146 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:14.337 06:26:05 -- common/autotest_common.sh@148 -- # : 00:08:14.337 06:26:05 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:14.337 06:26:05 -- common/autotest_common.sh@150 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:14.337 06:26:05 -- common/autotest_common.sh@152 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:14.337 06:26:05 -- common/autotest_common.sh@154 -- # : 0 00:08:14.337 06:26:05 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:14.337 06:26:05 -- common/autotest_common.sh@156 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:14.337 06:26:05 -- common/autotest_common.sh@158 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:14.337 06:26:05 -- common/autotest_common.sh@160 -- # : 0 00:08:14.337 06:26:05 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:14.337 06:26:05 -- common/autotest_common.sh@163 -- # : 00:08:14.337 06:26:05 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:14.337 06:26:05 -- common/autotest_common.sh@165 -- # : 1 00:08:14.337 06:26:05 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:14.337 06:26:05 -- common/autotest_common.sh@167 -- # : 1 00:08:14.337 06:26:05 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:14.337 06:26:05 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:14.337 06:26:05 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.337 06:26:05 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.337 06:26:05 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:14.337 06:26:05 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:14.337 06:26:05 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.337 06:26:05 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:14.337 06:26:05 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.337 06:26:05 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:14.337 06:26:05 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:14.337 06:26:05 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:14.337 06:26:05 -- common/autotest_common.sh@196 -- # cat 00:08:14.337 06:26:05 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:14.337 06:26:05 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.337 06:26:05 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:14.337 06:26:05 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.337 06:26:05 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:14.337 06:26:05 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:14.337 06:26:05 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:14.337 06:26:05 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.337 06:26:05 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:14.337 06:26:05 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.337 06:26:05 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:14.337 06:26:05 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.337 06:26:05 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:14.337 06:26:05 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.337 06:26:05 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:14.337 06:26:05 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:14.337 06:26:05 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:14.337 06:26:05 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:14.337 06:26:05 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:14.337 06:26:05 -- common/autotest_common.sh@249 -- # valgrind= 00:08:14.337 06:26:05 -- common/autotest_common.sh@255 -- # uname -s 00:08:14.337 06:26:05 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:14.337 06:26:05 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:14.337 06:26:05 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:14.337 06:26:05 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:14.337 06:26:05 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:14.337 06:26:05 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:14.337 06:26:05 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:08:14.337 06:26:05 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:14.337 06:26:05 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:14.337 06:26:05 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:14.337 06:26:05 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:14.337 06:26:05 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:14.337 06:26:05 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:14.337 06:26:05 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:14.337 06:26:05 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:14.337 06:26:05 -- common/autotest_common.sh@309 -- # [[ -z 72015 ]] 00:08:14.337 06:26:05 -- common/autotest_common.sh@309 -- # kill -0 72015 00:08:14.337 06:26:05 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:14.337 06:26:05 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:14.338 06:26:05 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:14.338 06:26:05 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:14.338 06:26:05 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:14.338 06:26:05 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:14.338 06:26:05 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:14.338 06:26:05 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.IeHByK 00:08:14.338 06:26:05 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:14.338 06:26:05 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.IeHByK/tests/target /tmp/spdk.IeHByK 00:08:14.338 06:26:05 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- 
common/autotest_common.sh@318 -- # df -T 00:08:14.338 06:26:05 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=13435068416 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=6147072000 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265167872 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6266425344 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=2493755392 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2506571776 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=12816384 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=13435068416 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=6147072000 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266290176 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6266429440 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=840085504 00:08:14.338 06:26:05 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=103477248 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=91617280 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=12990464 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253269504 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253281792 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:14.338 06:26:05 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # avails["$mount"]=97655181312 00:08:14.338 06:26:05 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:08:14.338 06:26:05 -- common/autotest_common.sh@354 -- # uses["$mount"]=2047598592 00:08:14.338 06:26:05 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:14.338 06:26:05 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:14.338 * Looking for test storage... 
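The df walk above is set_test_storage looking for a filesystem with at least the requested ~2 GiB free. A condensed sketch of the traced loop, keeping the same variable names (df -T reports 1K blocks; the harness stores bytes, which is why the traced avails/sizes/uses values are so large):

    probe_mounts() {
        local -A mounts fss sizes avails uses
        local source fs size use avail mount
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$((size * 1024))
            avails["$mount"]=$((avail * 1024))
            uses["$mount"]=$((use * 1024))
        done < <(df -T | grep -v Filesystem)
    }

It then maps the test directory to its mount point (the df + awk pair below), checks avails[$mount] against requested_size, and rejects tmpfs/ramfs candidates — here /home (btrfs, ~12.5 GiB free) wins.
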
00:08:14.338 06:26:05 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:14.338 06:26:05 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:14.338 06:26:05 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.338 06:26:05 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:14.338 06:26:05 -- common/autotest_common.sh@363 -- # mount=/home 00:08:14.338 06:26:05 -- common/autotest_common.sh@365 -- # target_space=13435068416 00:08:14.338 06:26:05 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:14.338 06:26:05 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:14.338 06:26:05 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.338 06:26:05 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.338 06:26:05 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.338 06:26:05 -- common/autotest_common.sh@380 -- # return 0 00:08:14.338 06:26:05 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:14.338 06:26:05 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:14.338 06:26:05 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:14.338 06:26:05 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:14.338 06:26:05 -- common/autotest_common.sh@1672 -- # true 00:08:14.338 06:26:05 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:14.338 06:26:05 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:14.338 06:26:05 -- common/autotest_common.sh@27 -- # exec 00:08:14.338 06:26:05 -- common/autotest_common.sh@29 -- # exec 00:08:14.338 06:26:05 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:14.338 06:26:05 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:14.338 06:26:05 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:14.338 06:26:05 -- common/autotest_common.sh@18 -- # set -x 00:08:14.338 06:26:05 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.338 06:26:05 -- nvmf/common.sh@7 -- # uname -s 00:08:14.338 06:26:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.338 06:26:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.338 06:26:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.338 06:26:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.338 06:26:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.338 06:26:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.338 06:26:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.338 06:26:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.338 06:26:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.338 06:26:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.338 06:26:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:14.338 06:26:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:14.338 06:26:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.338 06:26:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.338 06:26:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.338 06:26:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.338 06:26:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.338 06:26:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.338 06:26:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.338 06:26:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.339 06:26:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.339 06:26:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.339 06:26:05 -- paths/export.sh@5 -- # export PATH 00:08:14.339 06:26:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.339 06:26:05 -- nvmf/common.sh@46 -- # : 0 00:08:14.339 06:26:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.339 06:26:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.339 06:26:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.339 06:26:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.339 06:26:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.339 06:26:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:14.339 06:26:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.339 06:26:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.339 06:26:05 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:14.339 06:26:05 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:14.339 06:26:05 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:14.339 06:26:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.339 06:26:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.339 06:26:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.339 06:26:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.339 06:26:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.339 06:26:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.339 06:26:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.339 06:26:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.339 06:26:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.339 06:26:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.339 06:26:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.339 06:26:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.339 06:26:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.339 06:26:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.339 06:26:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.339 06:26:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.339 06:26:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.339 06:26:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.339 06:26:05 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.339 06:26:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.339 06:26:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.339 06:26:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.339 06:26:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.339 06:26:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.339 06:26:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.339 06:26:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.339 06:26:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.339 06:26:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.339 Cannot find device "nvmf_tgt_br" 00:08:14.339 06:26:05 -- nvmf/common.sh@154 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.339 Cannot find device "nvmf_tgt_br2" 00:08:14.339 06:26:05 -- nvmf/common.sh@155 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.339 06:26:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.339 Cannot find device "nvmf_tgt_br" 00:08:14.339 06:26:05 -- nvmf/common.sh@157 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:14.339 Cannot find device "nvmf_tgt_br2" 00:08:14.339 06:26:05 -- nvmf/common.sh@158 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:14.339 06:26:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:14.339 06:26:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.339 06:26:05 -- nvmf/common.sh@161 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.339 06:26:05 -- nvmf/common.sh@162 -- # true 00:08:14.339 06:26:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.339 06:26:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.339 06:26:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.339 06:26:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.339 06:26:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.339 06:26:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.339 06:26:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.339 06:26:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.339 06:26:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.339 06:26:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:14.339 06:26:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:14.339 06:26:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:14.339 06:26:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:14.339 06:26:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.339 06:26:05 
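Taken together, the nvmf_veth_init commands traced above and just below build a small bridged topology: three veth pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the bridge ends enslaved to nvmf_br. Condensed from the trace (per-link 'up' commands and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards in both directions before the target is started.
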
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.339 06:26:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.339 06:26:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:14.339 06:26:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:14.339 06:26:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.339 06:26:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.339 06:26:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.339 06:26:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.339 06:26:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.339 06:26:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:14.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:08:14.340 00:08:14.340 --- 10.0.0.2 ping statistics --- 00:08:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.340 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:14.340 06:26:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:14.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:14.340 00:08:14.340 --- 10.0.0.3 ping statistics --- 00:08:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.340 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:14.340 06:26:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:14.340 00:08:14.340 --- 10.0.0.1 ping statistics --- 00:08:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.340 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:14.340 06:26:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.340 06:26:05 -- nvmf/common.sh@421 -- # return 0 00:08:14.340 06:26:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.340 06:26:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.340 06:26:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:14.340 06:26:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:14.340 06:26:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.340 06:26:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:14.340 06:26:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:14.340 06:26:06 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:14.340 06:26:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:14.340 06:26:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.340 06:26:06 -- common/autotest_common.sh@10 -- # set +x 00:08:14.340 ************************************ 00:08:14.340 START TEST nvmf_filesystem_no_in_capsule 00:08:14.340 ************************************ 00:08:14.340 06:26:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:14.340 06:26:06 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:14.340 06:26:06 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:14.340 06:26:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.340 06:26:06 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:08:14.340 06:26:06 -- common/autotest_common.sh@10 -- # set +x 00:08:14.340 06:26:06 -- nvmf/common.sh@469 -- # nvmfpid=72173 00:08:14.340 06:26:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.340 06:26:06 -- nvmf/common.sh@470 -- # waitforlisten 72173 00:08:14.340 06:26:06 -- common/autotest_common.sh@819 -- # '[' -z 72173 ']' 00:08:14.340 06:26:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.340 06:26:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.340 06:26:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.340 06:26:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.340 06:26:06 -- common/autotest_common.sh@10 -- # set +x 00:08:14.340 [2024-10-04 06:26:06.081257] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:08:14.340 [2024-10-04 06:26:06.081346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.340 [2024-10-04 06:26:06.221749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.340 [2024-10-04 06:26:06.303562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.340 [2024-10-04 06:26:06.303764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.340 [2024-10-04 06:26:06.303781] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.340 [2024-10-04 06:26:06.303793] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
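The target launch traced here is nvmfappstart: nvmf_tgt runs inside the target namespace with all tracepoint groups enabled (-e 0xFFFF) and four cores (-m 0xF), and the harness blocks until the RPC socket answers. A simplified sketch — the real waitforlisten also handles timeouts and a configurable socket path:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app creates and serves /var/tmp/spdk.sock.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"   # bail out if the target died during startup
        sleep 0.1
    done
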
00:08:14.340 [2024-10-04 06:26:06.304306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.340 [2024-10-04 06:26:06.304436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.340 [2024-10-04 06:26:06.304739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.340 [2024-10-04 06:26:06.304775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.599 06:26:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.599 06:26:07 -- common/autotest_common.sh@852 -- # return 0 00:08:14.599 06:26:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.599 06:26:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.599 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.599 06:26:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.599 06:26:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:14.599 06:26:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:14.599 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.599 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.599 [2024-10-04 06:26:07.154782] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.599 06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.599 06:26:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:14.599 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.599 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.858 Malloc1 00:08:14.858 06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.858 06:26:07 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.858 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.858 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.858 06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.858 06:26:07 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:14.858 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.858 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.858 06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.858 06:26:07 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.858 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.858 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.858 [2024-10-04 06:26:07.412096] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.858 06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.858 06:26:07 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:14.858 06:26:07 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:14.858 06:26:07 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:14.858 06:26:07 -- common/autotest_common.sh@1359 -- # local bs 00:08:14.858 06:26:07 -- common/autotest_common.sh@1360 -- # local nb 00:08:14.858 06:26:07 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:14.858 06:26:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.858 06:26:07 -- common/autotest_common.sh@10 -- # set +x 00:08:14.858 
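Stripped of the trace plumbing, the provisioning sequence above is five RPCs; rpc_cmd is a thin wrapper over scripts/rpc.py, so the equivalent direct invocation is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # -u: IO unit size; -c 0: no in-capsule data (this is the no_in_capsule case)
    rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB ram disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdev_get_bdevs call after that only reads back the Malloc1 descriptor (its JSON follows below) so the test can compute the expected device size: block_size * num_blocks = 512 * 1048576 = 536870912 bytes.
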
06:26:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.858 06:26:07 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:14.858 { 00:08:14.858 "aliases": [ 00:08:14.858 "5d48f564-5417-47da-a7bb-5cf40a4b88d6" 00:08:14.858 ], 00:08:14.858 "assigned_rate_limits": { 00:08:14.858 "r_mbytes_per_sec": 0, 00:08:14.858 "rw_ios_per_sec": 0, 00:08:14.858 "rw_mbytes_per_sec": 0, 00:08:14.858 "w_mbytes_per_sec": 0 00:08:14.858 }, 00:08:14.858 "block_size": 512, 00:08:14.858 "claim_type": "exclusive_write", 00:08:14.858 "claimed": true, 00:08:14.858 "driver_specific": {}, 00:08:14.858 "memory_domains": [ 00:08:14.858 { 00:08:14.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.858 "dma_device_type": 2 00:08:14.858 } 00:08:14.858 ], 00:08:14.858 "name": "Malloc1", 00:08:14.858 "num_blocks": 1048576, 00:08:14.858 "product_name": "Malloc disk", 00:08:14.858 "supported_io_types": { 00:08:14.858 "abort": true, 00:08:14.858 "compare": false, 00:08:14.858 "compare_and_write": false, 00:08:14.858 "flush": true, 00:08:14.858 "nvme_admin": false, 00:08:14.858 "nvme_io": false, 00:08:14.858 "read": true, 00:08:14.858 "reset": true, 00:08:14.858 "unmap": true, 00:08:14.858 "write": true, 00:08:14.858 "write_zeroes": true 00:08:14.858 }, 00:08:14.858 "uuid": "5d48f564-5417-47da-a7bb-5cf40a4b88d6", 00:08:14.858 "zoned": false 00:08:14.858 } 00:08:14.858 ]' 00:08:14.858 06:26:07 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:14.858 06:26:07 -- common/autotest_common.sh@1362 -- # bs=512 00:08:14.858 06:26:07 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:15.116 06:26:07 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:15.116 06:26:07 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:15.116 06:26:07 -- common/autotest_common.sh@1367 -- # echo 512 00:08:15.116 06:26:07 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:15.116 06:26:07 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.116 06:26:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.116 06:26:07 -- common/autotest_common.sh@1177 -- # local i=0 00:08:15.116 06:26:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.116 06:26:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:15.116 06:26:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:17.666 06:26:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:17.666 06:26:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:17.666 06:26:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.666 06:26:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:17.666 06:26:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.666 06:26:09 -- common/autotest_common.sh@1187 -- # return 0 00:08:17.666 06:26:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:17.666 06:26:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:17.666 06:26:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:17.666 06:26:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:17.666 06:26:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:17.666 06:26:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:17.666 06:26:09 -- 
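On the host side the test then attaches through the kernel initiator and waits for the namespace to surface, matching on the subsystem serial. In outline (the traced waitforserial loop also gives up after ~16 iterations):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # waitforserial: poll until a block device reports the serial.
    while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do
        sleep 2
    done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

Here that resolves to nvme0n1, whose size (536870912 bytes, read from /sys/block just below) must equal the 512 MiB malloc bdev before parted lays down the GPT partition.
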
setup/common.sh@80 -- # echo 536870912 00:08:17.666 06:26:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:17.666 06:26:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:17.666 06:26:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:17.666 06:26:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:17.666 06:26:09 -- target/filesystem.sh@69 -- # partprobe 00:08:17.666 06:26:09 -- target/filesystem.sh@70 -- # sleep 1 00:08:18.601 06:26:10 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:18.601 06:26:10 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:18.601 06:26:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.601 06:26:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.601 06:26:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 ************************************ 00:08:18.601 START TEST filesystem_ext4 00:08:18.601 ************************************ 00:08:18.601 06:26:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:18.601 06:26:10 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:18.601 06:26:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.601 06:26:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:18.601 06:26:10 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:18.601 06:26:10 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:18.601 06:26:10 -- common/autotest_common.sh@904 -- # local i=0 00:08:18.601 06:26:10 -- common/autotest_common.sh@905 -- # local force 00:08:18.601 06:26:10 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:18.601 06:26:10 -- common/autotest_common.sh@908 -- # force=-F 00:08:18.601 06:26:10 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:18.601 mke2fs 1.47.0 (5-Feb-2023) 00:08:18.601 Discarding device blocks: 0/522240 done 00:08:18.601 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:18.601 Filesystem UUID: 7d3a7062-055d-404f-a365-6ae971853f99 00:08:18.601 Superblock backups stored on blocks: 00:08:18.601 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:18.601 00:08:18.601 Allocating group tables: 0/64 done 00:08:18.601 Writing inode tables: 0/64 done 00:08:18.601 Creating journal (8192 blocks): done 00:08:18.601 Writing superblocks and filesystem accounting information: 0/64 done 00:08:18.601 00:08:18.601 06:26:11 -- common/autotest_common.sh@921 -- # return 0 00:08:18.601 06:26:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.871 06:26:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.871 06:26:16 -- target/filesystem.sh@25 -- # sync 00:08:24.130 06:26:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.130 06:26:16 -- target/filesystem.sh@27 -- # sync 00:08:24.130 06:26:16 -- target/filesystem.sh@29 -- # i=0 00:08:24.130 06:26:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.130 06:26:16 -- target/filesystem.sh@37 -- # kill -0 72173 00:08:24.130 06:26:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.130 06:26:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.130 06:26:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.130 06:26:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.130 ************************************ 00:08:24.130 END TEST filesystem_ext4 00:08:24.130 
************************************ 00:08:24.130 00:08:24.130 real 0m5.695s 00:08:24.130 user 0m0.023s 00:08:24.130 sys 0m0.073s 00:08:24.130 06:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.130 06:26:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 06:26:16 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:24.130 06:26:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.130 06:26:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.130 06:26:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 ************************************ 00:08:24.130 START TEST filesystem_btrfs 00:08:24.130 ************************************ 00:08:24.130 06:26:16 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:24.130 06:26:16 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:24.130 06:26:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.130 06:26:16 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:24.130 06:26:16 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:24.130 06:26:16 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.130 06:26:16 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.130 06:26:16 -- common/autotest_common.sh@905 -- # local force 00:08:24.130 06:26:16 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:24.130 06:26:16 -- common/autotest_common.sh@910 -- # force=-f 00:08:24.130 06:26:16 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:24.390 btrfs-progs v6.8.1 00:08:24.390 See https://btrfs.readthedocs.io for more information. 00:08:24.390 00:08:24.390 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
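Each filesystem sub-test funnels through the make_filesystem helper traced just above, whose main job is picking the right force flag per mkfs flavor. A reduced sketch (the real helper also retries a few times before giving up):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mkfs.ext4 spells force with a capital F
        else
            force=-f    # btrfs and xfs use lowercase -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }
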
00:08:24.390 NOTE: several default settings have changed in version 5.15, please make sure 00:08:24.390 this does not affect your deployments: 00:08:24.390 - DUP for metadata (-m dup) 00:08:24.390 - enabled no-holes (-O no-holes) 00:08:24.390 - enabled free-space-tree (-R free-space-tree) 00:08:24.390 00:08:24.390 Label: (null) 00:08:24.390 UUID: 409acb7b-8d19-4925-957c-77c7d7c135d1 00:08:24.390 Node size: 16384 00:08:24.390 Sector size: 4096 (CPU page size: 4096) 00:08:24.390 Filesystem size: 510.00MiB 00:08:24.390 Block group profiles: 00:08:24.390 Data: single 8.00MiB 00:08:24.390 Metadata: DUP 32.00MiB 00:08:24.390 System: DUP 8.00MiB 00:08:24.390 SSD detected: yes 00:08:24.390 Zoned device: no 00:08:24.390 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:24.390 Checksum: crc32c 00:08:24.390 Number of devices: 1 00:08:24.390 Devices: 00:08:24.390 ID SIZE PATH 00:08:24.390 1 510.00MiB /dev/nvme0n1p1 00:08:24.390 00:08:24.390 06:26:16 -- common/autotest_common.sh@921 -- # return 0 00:08:24.390 06:26:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:24.390 06:26:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:24.390 06:26:16 -- target/filesystem.sh@25 -- # sync 00:08:24.390 06:26:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:24.390 06:26:16 -- target/filesystem.sh@27 -- # sync 00:08:24.390 06:26:16 -- target/filesystem.sh@29 -- # i=0 00:08:24.390 06:26:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:24.390 06:26:16 -- target/filesystem.sh@37 -- # kill -0 72173 00:08:24.390 06:26:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:24.390 06:26:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:24.390 06:26:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:24.390 06:26:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:24.390 ************************************ 00:08:24.390 END TEST filesystem_btrfs 00:08:24.390 ************************************ 00:08:24.390 00:08:24.390 real 0m0.304s 00:08:24.390 user 0m0.021s 00:08:24.390 sys 0m0.073s 00:08:24.390 06:26:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.390 06:26:17 -- common/autotest_common.sh@10 -- # set +x 00:08:24.390 06:26:17 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:24.390 06:26:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.390 06:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.390 06:26:17 -- common/autotest_common.sh@10 -- # set +x 00:08:24.390 ************************************ 00:08:24.390 START TEST filesystem_xfs 00:08:24.390 ************************************ 00:08:24.390 06:26:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:24.390 06:26:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:24.390 06:26:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.390 06:26:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:24.390 06:26:17 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:24.390 06:26:17 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.390 06:26:17 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.390 06:26:17 -- common/autotest_common.sh@905 -- # local force 00:08:24.390 06:26:17 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:24.390 06:26:17 -- common/autotest_common.sh@910 -- # force=-f 00:08:24.390 06:26:17 -- common/autotest_common.sh@913 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:24.649 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:24.649 = sectsz=512 attr=2, projid32bit=1 00:08:24.649 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:24.649 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:24.649 data = bsize=4096 blocks=130560, imaxpct=25 00:08:24.649 = sunit=0 swidth=0 blks 00:08:24.649 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:24.649 log =internal log bsize=4096 blocks=16384, version=2 00:08:24.649 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:24.649 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:25.587 Discarding blocks...Done. 00:08:25.587 06:26:17 -- common/autotest_common.sh@921 -- # return 0 00:08:25.587 06:26:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.120 06:26:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.120 06:26:20 -- target/filesystem.sh@25 -- # sync 00:08:28.120 06:26:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.120 06:26:20 -- target/filesystem.sh@27 -- # sync 00:08:28.120 06:26:20 -- target/filesystem.sh@29 -- # i=0 00:08:28.120 06:26:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.120 06:26:20 -- target/filesystem.sh@37 -- # kill -0 72173 00:08:28.120 06:26:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.120 06:26:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.120 06:26:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.120 06:26:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.120 ************************************ 00:08:28.120 END TEST filesystem_xfs 00:08:28.120 ************************************ 00:08:28.120 00:08:28.120 real 0m3.280s 00:08:28.120 user 0m0.030s 00:08:28.120 sys 0m0.061s 00:08:28.120 06:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.120 06:26:20 -- common/autotest_common.sh@10 -- # set +x 00:08:28.120 06:26:20 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.120 06:26:20 -- target/filesystem.sh@93 -- # sync 00:08:28.120 06:26:20 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.120 06:26:20 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.120 06:26:20 -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.120 06:26:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:28.120 06:26:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.120 06:26:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.120 06:26:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:28.120 06:26:20 -- common/autotest_common.sh@1210 -- # return 0 00:08:28.120 06:26:20 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.120 06:26:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.120 06:26:20 -- common/autotest_common.sh@10 -- # set +x 00:08:28.120 06:26:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.120 06:26:20 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:28.120 06:26:20 -- target/filesystem.sh@101 -- # killprocess 72173 00:08:28.120 06:26:20 -- common/autotest_common.sh@926 -- # '[' -z 72173 ']' 00:08:28.120 06:26:20 -- common/autotest_common.sh@930 -- # kill -0 72173 00:08:28.120 06:26:20 -- common/autotest_common.sh@931 -- # uname 00:08:28.120 06:26:20 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.120 06:26:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72173 00:08:28.120 06:26:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.120 06:26:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.120 killing process with pid 72173 00:08:28.120 06:26:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72173' 00:08:28.120 06:26:20 -- common/autotest_common.sh@945 -- # kill 72173 00:08:28.120 06:26:20 -- common/autotest_common.sh@950 -- # wait 72173 00:08:28.379 06:26:20 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:28.379 00:08:28.379 real 0m14.939s 00:08:28.379 user 0m57.532s 00:08:28.379 sys 0m1.839s 00:08:28.379 06:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.379 06:26:20 -- common/autotest_common.sh@10 -- # set +x 00:08:28.379 ************************************ 00:08:28.379 END TEST nvmf_filesystem_no_in_capsule 00:08:28.379 ************************************ 00:08:28.379 06:26:21 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:28.379 06:26:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:28.379 06:26:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.379 06:26:21 -- common/autotest_common.sh@10 -- # set +x 00:08:28.379 ************************************ 00:08:28.379 START TEST nvmf_filesystem_in_capsule 00:08:28.379 ************************************ 00:08:28.379 06:26:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:28.379 06:26:21 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:28.379 06:26:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:28.379 06:26:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:28.379 06:26:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:28.379 06:26:21 -- common/autotest_common.sh@10 -- # set +x 00:08:28.379 06:26:21 -- nvmf/common.sh@469 -- # nvmfpid=72552 00:08:28.379 06:26:21 -- nvmf/common.sh@470 -- # waitforlisten 72552 00:08:28.379 06:26:21 -- common/autotest_common.sh@819 -- # '[' -z 72552 ']' 00:08:28.379 06:26:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.379 06:26:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.379 06:26:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.379 06:26:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.379 06:26:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.379 06:26:21 -- common/autotest_common.sh@10 -- # set +x 00:08:28.638 [2024-10-04 06:26:21.091783] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
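For reference, the nvmfappstart/waitforlisten step logged here reduces to roughly the following shell sequence. This is a minimal sketch, not the literal autotest_common.sh code: the rpc.py probe and the retry budget are assumptions, while the nvmf_tgt command line and network namespace are the ones printed in this log.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the target answers; the test proper only
# proceeds once an RPC round-trip succeeds (hypothetical helper loop).
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done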
00:08:28.638 [2024-10-04 06:26:21.091907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.638 [2024-10-04 06:26:21.230045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.638 [2024-10-04 06:26:21.292011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.638 [2024-10-04 06:26:21.292136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.638 [2024-10-04 06:26:21.292148] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.638 [2024-10-04 06:26:21.292157] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.638 [2024-10-04 06:26:21.292366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.638 [2024-10-04 06:26:21.292457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.638 [2024-10-04 06:26:21.292776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.638 [2024-10-04 06:26:21.292781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.575 06:26:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.575 06:26:22 -- common/autotest_common.sh@852 -- # return 0 00:08:29.575 06:26:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:29.575 06:26:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:29.575 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.575 06:26:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.575 06:26:22 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:29.575 06:26:22 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:29.575 06:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.575 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.575 [2024-10-04 06:26:22.163138] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.575 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.575 06:26:22 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:29.575 06:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.575 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 Malloc1 00:08:29.834 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.834 06:26:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.834 06:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.834 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.834 06:26:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.834 06:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.834 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.834 06:26:22 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.834 06:26:22 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.834 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 [2024-10-04 06:26:22.372808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.834 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.834 06:26:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:29.834 06:26:22 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:29.834 06:26:22 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:29.834 06:26:22 -- common/autotest_common.sh@1359 -- # local bs 00:08:29.834 06:26:22 -- common/autotest_common.sh@1360 -- # local nb 00:08:29.834 06:26:22 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:29.834 06:26:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.834 06:26:22 -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 06:26:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.834 06:26:22 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:29.834 { 00:08:29.834 "aliases": [ 00:08:29.834 "0e15d383-b182-48be-9ccd-523ebd73252a" 00:08:29.834 ], 00:08:29.834 "assigned_rate_limits": { 00:08:29.834 "r_mbytes_per_sec": 0, 00:08:29.834 "rw_ios_per_sec": 0, 00:08:29.834 "rw_mbytes_per_sec": 0, 00:08:29.834 "w_mbytes_per_sec": 0 00:08:29.834 }, 00:08:29.834 "block_size": 512, 00:08:29.834 "claim_type": "exclusive_write", 00:08:29.834 "claimed": true, 00:08:29.834 "driver_specific": {}, 00:08:29.834 "memory_domains": [ 00:08:29.834 { 00:08:29.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.834 "dma_device_type": 2 00:08:29.834 } 00:08:29.834 ], 00:08:29.834 "name": "Malloc1", 00:08:29.834 "num_blocks": 1048576, 00:08:29.834 "product_name": "Malloc disk", 00:08:29.834 "supported_io_types": { 00:08:29.834 "abort": true, 00:08:29.834 "compare": false, 00:08:29.834 "compare_and_write": false, 00:08:29.834 "flush": true, 00:08:29.834 "nvme_admin": false, 00:08:29.834 "nvme_io": false, 00:08:29.834 "read": true, 00:08:29.834 "reset": true, 00:08:29.834 "unmap": true, 00:08:29.834 "write": true, 00:08:29.834 "write_zeroes": true 00:08:29.834 }, 00:08:29.834 "uuid": "0e15d383-b182-48be-9ccd-523ebd73252a", 00:08:29.834 "zoned": false 00:08:29.834 } 00:08:29.834 ]' 00:08:29.834 06:26:22 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:29.834 06:26:22 -- common/autotest_common.sh@1362 -- # bs=512 00:08:29.834 06:26:22 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:29.834 06:26:22 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:29.834 06:26:22 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:29.834 06:26:22 -- common/autotest_common.sh@1367 -- # echo 512 00:08:29.834 06:26:22 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:29.834 06:26:22 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:30.093 06:26:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:30.093 06:26:22 -- common/autotest_common.sh@1177 -- # local i=0 00:08:30.093 06:26:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.093 06:26:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:30.093 06:26:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:32.656 06:26:24 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:32.656 06:26:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:32.656 06:26:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.656 06:26:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:32.656 06:26:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.656 06:26:24 -- common/autotest_common.sh@1187 -- # return 0 00:08:32.656 06:26:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:32.656 06:26:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:32.656 06:26:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:32.656 06:26:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:32.656 06:26:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:32.656 06:26:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:32.656 06:26:24 -- setup/common.sh@80 -- # echo 536870912 00:08:32.656 06:26:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:32.656 06:26:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:32.657 06:26:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:32.657 06:26:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:32.657 06:26:24 -- target/filesystem.sh@69 -- # partprobe 00:08:32.657 06:26:24 -- target/filesystem.sh@70 -- # sleep 1 00:08:33.224 06:26:25 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:33.224 06:26:25 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:33.224 06:26:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:33.224 06:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.224 06:26:25 -- common/autotest_common.sh@10 -- # set +x 00:08:33.224 ************************************ 00:08:33.224 START TEST filesystem_in_capsule_ext4 00:08:33.224 ************************************ 00:08:33.224 06:26:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:33.224 06:26:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:33.224 06:26:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.224 06:26:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:33.224 06:26:25 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:33.224 06:26:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:33.224 06:26:25 -- common/autotest_common.sh@904 -- # local i=0 00:08:33.224 06:26:25 -- common/autotest_common.sh@905 -- # local force 00:08:33.224 06:26:25 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:33.224 06:26:25 -- common/autotest_common.sh@908 -- # force=-F 00:08:33.224 06:26:25 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:33.225 mke2fs 1.47.0 (5-Feb-2023) 00:08:33.483 Discarding device blocks: 0/522240 done 00:08:33.483 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:33.483 Filesystem UUID: 9b641076-ce76-4d6c-b414-8564c07cfc7e 00:08:33.483 Superblock backups stored on blocks: 00:08:33.483 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:33.483 00:08:33.483 Allocating group tables: 0/64 done 00:08:33.483 Writing inode tables: 0/64 done 00:08:33.483 Creating journal (8192 blocks): done 00:08:33.483 Writing superblocks and filesystem accounting information: 0/64 done 00:08:33.483 00:08:33.483 06:26:26 
-- common/autotest_common.sh@921 -- # return 0 00:08:33.483 06:26:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.748 06:26:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.748 06:26:31 -- target/filesystem.sh@25 -- # sync 00:08:38.748 06:26:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.748 06:26:31 -- target/filesystem.sh@27 -- # sync 00:08:38.748 06:26:31 -- target/filesystem.sh@29 -- # i=0 00:08:38.748 06:26:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.006 06:26:31 -- target/filesystem.sh@37 -- # kill -0 72552 00:08:39.006 06:26:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.006 06:26:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.006 06:26:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.006 06:26:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.006 00:08:39.006 real 0m5.586s 00:08:39.006 user 0m0.021s 00:08:39.006 sys 0m0.068s 00:08:39.006 06:26:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.006 06:26:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.006 ************************************ 00:08:39.006 END TEST filesystem_in_capsule_ext4 00:08:39.006 ************************************ 00:08:39.006 06:26:31 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.006 06:26:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:39.006 06:26:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.006 06:26:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.006 ************************************ 00:08:39.006 START TEST filesystem_in_capsule_btrfs 00:08:39.006 ************************************ 00:08:39.006 06:26:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:39.006 06:26:31 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.006 06:26:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.006 06:26:31 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.007 06:26:31 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:39.007 06:26:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:39.007 06:26:31 -- common/autotest_common.sh@904 -- # local i=0 00:08:39.007 06:26:31 -- common/autotest_common.sh@905 -- # local force 00:08:39.007 06:26:31 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:39.007 06:26:31 -- common/autotest_common.sh@910 -- # force=-f 00:08:39.007 06:26:31 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:39.007 btrfs-progs v6.8.1 00:08:39.007 See https://btrfs.readthedocs.io for more information. 00:08:39.007 00:08:39.007 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:39.007 NOTE: several default settings have changed in version 5.15, please make sure 00:08:39.007 this does not affect your deployments: 00:08:39.007 - DUP for metadata (-m dup) 00:08:39.007 - enabled no-holes (-O no-holes) 00:08:39.007 - enabled free-space-tree (-R free-space-tree) 00:08:39.007 00:08:39.007 Label: (null) 00:08:39.007 UUID: a7604d46-9732-4fcb-9714-52b5e4489a36 00:08:39.007 Node size: 16384 00:08:39.007 Sector size: 4096 (CPU page size: 4096) 00:08:39.007 Filesystem size: 510.00MiB 00:08:39.007 Block group profiles: 00:08:39.007 Data: single 8.00MiB 00:08:39.007 Metadata: DUP 32.00MiB 00:08:39.007 System: DUP 8.00MiB 00:08:39.007 SSD detected: yes 00:08:39.007 Zoned device: no 00:08:39.007 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:39.007 Checksum: crc32c 00:08:39.007 Number of devices: 1 00:08:39.007 Devices: 00:08:39.007 ID SIZE PATH 00:08:39.007 1 510.00MiB /dev/nvme0n1p1 00:08:39.007 00:08:39.007 06:26:31 -- common/autotest_common.sh@921 -- # return 0 00:08:39.007 06:26:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.007 06:26:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.007 06:26:31 -- target/filesystem.sh@25 -- # sync 00:08:39.265 06:26:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.265 06:26:31 -- target/filesystem.sh@27 -- # sync 00:08:39.265 06:26:31 -- target/filesystem.sh@29 -- # i=0 00:08:39.265 06:26:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.265 06:26:31 -- target/filesystem.sh@37 -- # kill -0 72552 00:08:39.265 06:26:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.265 06:26:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.265 06:26:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.265 06:26:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.265 00:08:39.265 real 0m0.229s 00:08:39.265 user 0m0.023s 00:08:39.265 sys 0m0.064s 00:08:39.265 06:26:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.265 06:26:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.265 ************************************ 00:08:39.265 END TEST filesystem_in_capsule_btrfs 00:08:39.265 ************************************ 00:08:39.265 06:26:31 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:39.265 06:26:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:39.265 06:26:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.265 06:26:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.265 ************************************ 00:08:39.265 START TEST filesystem_in_capsule_xfs 00:08:39.265 ************************************ 00:08:39.265 06:26:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:39.265 06:26:31 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:39.265 06:26:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.265 06:26:31 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:39.265 06:26:31 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:39.265 06:26:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:39.265 06:26:31 -- common/autotest_common.sh@904 -- # local i=0 00:08:39.265 06:26:31 -- common/autotest_common.sh@905 -- # local force 00:08:39.265 06:26:31 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:39.265 06:26:31 -- common/autotest_common.sh@910 -- # force=-f 00:08:39.265 06:26:31 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:39.265 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:39.265 = sectsz=512 attr=2, projid32bit=1 00:08:39.265 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:39.265 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:39.265 data = bsize=4096 blocks=130560, imaxpct=25 00:08:39.265 = sunit=0 swidth=0 blks 00:08:39.265 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:39.265 log =internal log bsize=4096 blocks=16384, version=2 00:08:39.265 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:39.265 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:40.201 Discarding blocks...Done. 00:08:40.201 06:26:32 -- common/autotest_common.sh@921 -- # return 0 00:08:40.201 06:26:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.102 06:26:34 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.102 06:26:34 -- target/filesystem.sh@25 -- # sync 00:08:42.102 06:26:34 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.102 06:26:34 -- target/filesystem.sh@27 -- # sync 00:08:42.102 06:26:34 -- target/filesystem.sh@29 -- # i=0 00:08:42.102 06:26:34 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.102 06:26:34 -- target/filesystem.sh@37 -- # kill -0 72552 00:08:42.102 06:26:34 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.102 06:26:34 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.102 06:26:34 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.102 06:26:34 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.102 00:08:42.102 real 0m2.690s 00:08:42.102 user 0m0.026s 00:08:42.102 sys 0m0.051s 00:08:42.102 06:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.102 06:26:34 -- common/autotest_common.sh@10 -- # set +x 00:08:42.102 ************************************ 00:08:42.102 END TEST filesystem_in_capsule_xfs 00:08:42.102 ************************************ 00:08:42.102 06:26:34 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:42.102 06:26:34 -- target/filesystem.sh@93 -- # sync 00:08:42.102 06:26:34 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.102 06:26:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.102 06:26:34 -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.102 06:26:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:42.102 06:26:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.102 06:26:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:42.102 06:26:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.102 06:26:34 -- common/autotest_common.sh@1210 -- # return 0 00:08:42.102 06:26:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.102 06:26:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:42.102 06:26:34 -- common/autotest_common.sh@10 -- # set +x 00:08:42.102 06:26:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:42.102 06:26:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:42.102 06:26:34 -- target/filesystem.sh@101 -- # killprocess 72552 00:08:42.102 06:26:34 -- common/autotest_common.sh@926 -- # '[' -z 72552 ']' 00:08:42.102 06:26:34 -- common/autotest_common.sh@930 -- # kill -0 72552 00:08:42.102 06:26:34 -- 
common/autotest_common.sh@931 -- # uname 00:08:42.102 06:26:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:42.102 06:26:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72552 00:08:42.102 06:26:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:42.102 06:26:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:42.102 killing process with pid 72552 00:08:42.102 06:26:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72552' 00:08:42.102 06:26:34 -- common/autotest_common.sh@945 -- # kill 72552 00:08:42.102 06:26:34 -- common/autotest_common.sh@950 -- # wait 72552 00:08:42.669 06:26:35 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.669 00:08:42.669 real 0m14.140s 00:08:42.669 user 0m54.634s 00:08:42.669 sys 0m1.594s 00:08:42.669 06:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.669 06:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.669 ************************************ 00:08:42.669 END TEST nvmf_filesystem_in_capsule 00:08:42.669 ************************************ 00:08:42.669 06:26:35 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:42.669 06:26:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.669 06:26:35 -- nvmf/common.sh@116 -- # sync 00:08:42.669 06:26:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.669 06:26:35 -- nvmf/common.sh@119 -- # set +e 00:08:42.669 06:26:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.669 06:26:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.669 rmmod nvme_tcp 00:08:42.669 rmmod nvme_fabrics 00:08:42.669 rmmod nvme_keyring 00:08:42.669 06:26:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.669 06:26:35 -- nvmf/common.sh@123 -- # set -e 00:08:42.669 06:26:35 -- nvmf/common.sh@124 -- # return 0 00:08:42.669 06:26:35 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:42.669 06:26:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.669 06:26:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.669 06:26:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.669 06:26:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.669 06:26:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.669 06:26:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.669 06:26:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.669 06:26:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.669 06:26:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:42.669 00:08:42.669 real 0m29.921s 00:08:42.669 user 1m52.414s 00:08:42.669 sys 0m3.823s 00:08:42.669 06:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.669 ************************************ 00:08:42.669 END TEST nvmf_filesystem 00:08:42.669 ************************************ 00:08:42.669 06:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.928 06:26:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:42.928 06:26:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:42.928 06:26:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.928 06:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.928 ************************************ 00:08:42.928 START TEST nvmf_discovery 00:08:42.928 ************************************ 00:08:42.928 06:26:35 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:42.928 * Looking for test storage... 00:08:42.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.928 06:26:35 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.928 06:26:35 -- nvmf/common.sh@7 -- # uname -s 00:08:42.928 06:26:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.928 06:26:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.928 06:26:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.928 06:26:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.928 06:26:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.928 06:26:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.928 06:26:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.928 06:26:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.928 06:26:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.928 06:26:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.928 06:26:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:42.928 06:26:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:42.928 06:26:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.928 06:26:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.928 06:26:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.928 06:26:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.928 06:26:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.928 06:26:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.928 06:26:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.928 06:26:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.928 06:26:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.928 06:26:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.928 06:26:35 -- paths/export.sh@5 -- # export PATH 00:08:42.928 06:26:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.928 06:26:35 -- nvmf/common.sh@46 -- # : 0 00:08:42.928 06:26:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.928 06:26:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.928 06:26:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.928 06:26:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.928 06:26:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.928 06:26:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:42.928 06:26:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.928 06:26:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.928 06:26:35 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:42.928 06:26:35 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:42.928 06:26:35 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:42.929 06:26:35 -- target/discovery.sh@15 -- # hash nvme 00:08:42.929 06:26:35 -- target/discovery.sh@20 -- # nvmftestinit 00:08:42.929 06:26:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:42.929 06:26:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.929 06:26:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.929 06:26:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.929 06:26:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.929 06:26:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.929 06:26:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.929 06:26:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.929 06:26:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:42.929 06:26:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:42.929 06:26:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:42.929 06:26:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:42.929 06:26:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:42.929 06:26:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:42.929 06:26:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.929 06:26:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.929 06:26:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:42.929 06:26:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:42.929 06:26:35 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.929 06:26:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.929 06:26:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.929 06:26:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.929 06:26:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.929 06:26:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.929 06:26:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.929 06:26:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.929 06:26:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:42.929 06:26:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:42.929 Cannot find device "nvmf_tgt_br" 00:08:42.929 06:26:35 -- nvmf/common.sh@154 -- # true 00:08:42.929 06:26:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.929 Cannot find device "nvmf_tgt_br2" 00:08:42.929 06:26:35 -- nvmf/common.sh@155 -- # true 00:08:42.929 06:26:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:42.929 06:26:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:42.929 Cannot find device "nvmf_tgt_br" 00:08:42.929 06:26:35 -- nvmf/common.sh@157 -- # true 00:08:42.929 06:26:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:42.929 Cannot find device "nvmf_tgt_br2" 00:08:42.929 06:26:35 -- nvmf/common.sh@158 -- # true 00:08:42.929 06:26:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:43.187 06:26:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:43.187 06:26:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.187 06:26:35 -- nvmf/common.sh@161 -- # true 00:08:43.187 06:26:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.187 06:26:35 -- nvmf/common.sh@162 -- # true 00:08:43.187 06:26:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.187 06:26:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.187 06:26:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.187 06:26:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.187 06:26:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.187 06:26:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.187 06:26:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.187 06:26:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.187 06:26:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.187 06:26:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:43.187 06:26:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:43.187 06:26:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:43.187 06:26:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:43.187 06:26:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.187 06:26:35 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.187 06:26:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.187 06:26:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:43.187 06:26:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:43.187 06:26:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.187 06:26:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.187 06:26:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.187 06:26:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.187 06:26:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.187 06:26:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:43.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:43.187 00:08:43.187 --- 10.0.0.2 ping statistics --- 00:08:43.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.187 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:43.187 06:26:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:43.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:43.187 00:08:43.187 --- 10.0.0.3 ping statistics --- 00:08:43.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.187 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:43.187 06:26:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:43.187 00:08:43.187 --- 10.0.0.1 ping statistics --- 00:08:43.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.187 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:43.187 06:26:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.187 06:26:35 -- nvmf/common.sh@421 -- # return 0 00:08:43.187 06:26:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.187 06:26:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.187 06:26:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:43.187 06:26:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:43.187 06:26:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.187 06:26:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:43.187 06:26:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.187 06:26:35 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:43.187 06:26:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.187 06:26:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:43.187 06:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:43.187 06:26:35 -- nvmf/common.sh@469 -- # nvmfpid=73091 00:08:43.187 06:26:35 -- nvmf/common.sh@470 -- # waitforlisten 73091 00:08:43.187 06:26:35 -- common/autotest_common.sh@819 -- # '[' -z 73091 ']' 00:08:43.187 06:26:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.187 06:26:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.187 06:26:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.187 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.187 06:26:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.187 06:26:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.187 06:26:35 -- common/autotest_common.sh@10 -- # set +x 00:08:43.446 [2024-10-04 06:26:35.880460] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:08:43.446 [2024-10-04 06:26:35.880553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.446 [2024-10-04 06:26:36.013371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.446 [2024-10-04 06:26:36.091447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.446 [2024-10-04 06:26:36.091637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.446 [2024-10-04 06:26:36.091652] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.446 [2024-10-04 06:26:36.091662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.446 [2024-10-04 06:26:36.091788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.446 [2024-10-04 06:26:36.092073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.446 [2024-10-04 06:26:36.092392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.446 [2024-10-04 06:26:36.092427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.381 06:26:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:44.381 06:26:36 -- common/autotest_common.sh@852 -- # return 0 00:08:44.381 06:26:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.381 06:26:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 06:26:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.381 06:26:36 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 [2024-10-04 06:26:36.860437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@26 -- # seq 1 4 00:08:44.381 06:26:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:44.381 06:26:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 Null1 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
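The interleaved rpc_cmd calls in this stretch of the log amount to the setup below; a condensed sketch assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py, with the NQNs, serial numbers, address, and ports copied from the log (the null bdev size/block-size arguments are the 102400/512 pair shown in these commands).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport init logged above
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512               # Null1..Null4 backing bdevs
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # expose the discovery service
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # referral verified later

With this in place, the nvme discover call that follows should report six records — the current discovery subsystem, the four cnode subsystems, and the 4430 referral — which is exactly what the discovery log output below shows.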
00:08:44.381 06:26:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 [2024-10-04 06:26:36.921110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:44.381 06:26:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 Null2 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.381 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.381 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.381 06:26:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:44.381 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:44.382 06:26:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 Null3 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:44.382 06:26:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 Null4 00:08:44.382 06:26:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:44.382 06:26:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:36 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:44.382 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:44.382 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:37 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.382 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:37 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:44.382 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.382 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.382 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.382 06:26:37 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 4420 00:08:44.641 00:08:44.641 Discovery Log Number of Records 6, Generation counter 6 00:08:44.641 =====Discovery Log Entry 0====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: current discovery subsystem 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4420 00:08:44.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: explicit discovery connections, duplicate discovery information 00:08:44.641 sectype: none 00:08:44.641 =====Discovery Log Entry 1====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: nvme subsystem 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4420 00:08:44.641 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: none 00:08:44.641 sectype: none 00:08:44.641 =====Discovery Log Entry 2====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: nvme subsystem 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4420 
00:08:44.641 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: none 00:08:44.641 sectype: none 00:08:44.641 =====Discovery Log Entry 3====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: nvme subsystem 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4420 00:08:44.641 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: none 00:08:44.641 sectype: none 00:08:44.641 =====Discovery Log Entry 4====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: nvme subsystem 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4420 00:08:44.641 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: none 00:08:44.641 sectype: none 00:08:44.641 =====Discovery Log Entry 5====== 00:08:44.641 trtype: tcp 00:08:44.641 adrfam: ipv4 00:08:44.641 subtype: discovery subsystem referral 00:08:44.641 treq: not required 00:08:44.641 portid: 0 00:08:44.641 trsvcid: 4430 00:08:44.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:44.641 traddr: 10.0.0.2 00:08:44.641 eflags: none 00:08:44.641 sectype: none 00:08:44.641 06:26:37 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:44.641 Perform nvmf subsystem discovery via RPC 00:08:44.641 06:26:37 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:44.641 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.641 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.641 [2024-10-04 06:26:37.157277] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:44.641 [ 00:08:44.641 { 00:08:44.641 "allow_any_host": true, 00:08:44.641 "hosts": [], 00:08:44.641 "listen_addresses": [ 00:08:44.641 { 00:08:44.641 "adrfam": "IPv4", 00:08:44.641 "traddr": "10.0.0.2", 00:08:44.641 "transport": "TCP", 00:08:44.641 "trsvcid": "4420", 00:08:44.641 "trtype": "TCP" 00:08:44.641 } 00:08:44.641 ], 00:08:44.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:44.641 "subtype": "Discovery" 00:08:44.641 }, 00:08:44.641 { 00:08:44.641 "allow_any_host": true, 00:08:44.641 "hosts": [], 00:08:44.641 "listen_addresses": [ 00:08:44.641 { 00:08:44.641 "adrfam": "IPv4", 00:08:44.641 "traddr": "10.0.0.2", 00:08:44.641 "transport": "TCP", 00:08:44.641 "trsvcid": "4420", 00:08:44.641 "trtype": "TCP" 00:08:44.641 } 00:08:44.641 ], 00:08:44.641 "max_cntlid": 65519, 00:08:44.641 "max_namespaces": 32, 00:08:44.641 "min_cntlid": 1, 00:08:44.641 "model_number": "SPDK bdev Controller", 00:08:44.641 "namespaces": [ 00:08:44.641 { 00:08:44.642 "bdev_name": "Null1", 00:08:44.642 "name": "Null1", 00:08:44.642 "nguid": "456F132D7928415CAE1774A64E39BA04", 00:08:44.642 "nsid": 1, 00:08:44.642 "uuid": "456f132d-7928-415c-ae17-74a64e39ba04" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:44.642 "serial_number": "SPDK00000000000001", 00:08:44.642 "subtype": "NVMe" 00:08:44.642 }, 00:08:44.642 { 00:08:44.642 "allow_any_host": true, 00:08:44.642 "hosts": [], 00:08:44.642 "listen_addresses": [ 00:08:44.642 { 00:08:44.642 "adrfam": "IPv4", 00:08:44.642 "traddr": "10.0.0.2", 00:08:44.642 "transport": "TCP", 00:08:44.642 "trsvcid": "4420", 00:08:44.642 "trtype": "TCP" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "max_cntlid": 65519, 00:08:44.642 "max_namespaces": 32, 00:08:44.642 "min_cntlid": 1, 
00:08:44.642 "model_number": "SPDK bdev Controller", 00:08:44.642 "namespaces": [ 00:08:44.642 { 00:08:44.642 "bdev_name": "Null2", 00:08:44.642 "name": "Null2", 00:08:44.642 "nguid": "F304006C1296464BA14BB4311088D10B", 00:08:44.642 "nsid": 1, 00:08:44.642 "uuid": "f304006c-1296-464b-a14b-b4311088d10b" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:44.642 "serial_number": "SPDK00000000000002", 00:08:44.642 "subtype": "NVMe" 00:08:44.642 }, 00:08:44.642 { 00:08:44.642 "allow_any_host": true, 00:08:44.642 "hosts": [], 00:08:44.642 "listen_addresses": [ 00:08:44.642 { 00:08:44.642 "adrfam": "IPv4", 00:08:44.642 "traddr": "10.0.0.2", 00:08:44.642 "transport": "TCP", 00:08:44.642 "trsvcid": "4420", 00:08:44.642 "trtype": "TCP" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "max_cntlid": 65519, 00:08:44.642 "max_namespaces": 32, 00:08:44.642 "min_cntlid": 1, 00:08:44.642 "model_number": "SPDK bdev Controller", 00:08:44.642 "namespaces": [ 00:08:44.642 { 00:08:44.642 "bdev_name": "Null3", 00:08:44.642 "name": "Null3", 00:08:44.642 "nguid": "A4863C095F7C40F6B272A1ECF2549359", 00:08:44.642 "nsid": 1, 00:08:44.642 "uuid": "a4863c09-5f7c-40f6-b272-a1ecf2549359" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:44.642 "serial_number": "SPDK00000000000003", 00:08:44.642 "subtype": "NVMe" 00:08:44.642 }, 00:08:44.642 { 00:08:44.642 "allow_any_host": true, 00:08:44.642 "hosts": [], 00:08:44.642 "listen_addresses": [ 00:08:44.642 { 00:08:44.642 "adrfam": "IPv4", 00:08:44.642 "traddr": "10.0.0.2", 00:08:44.642 "transport": "TCP", 00:08:44.642 "trsvcid": "4420", 00:08:44.642 "trtype": "TCP" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "max_cntlid": 65519, 00:08:44.642 "max_namespaces": 32, 00:08:44.642 "min_cntlid": 1, 00:08:44.642 "model_number": "SPDK bdev Controller", 00:08:44.642 "namespaces": [ 00:08:44.642 { 00:08:44.642 "bdev_name": "Null4", 00:08:44.642 "name": "Null4", 00:08:44.642 "nguid": "AC3236903138431285610C774D5AE471", 00:08:44.642 "nsid": 1, 00:08:44.642 "uuid": "ac323690-3138-4312-8561-0c774d5ae471" 00:08:44.642 } 00:08:44.642 ], 00:08:44.642 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:44.642 "serial_number": "SPDK00000000000004", 00:08:44.642 "subtype": "NVMe" 00:08:44.642 } 00:08:44.642 ] 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@42 -- # seq 1 4 00:08:44.642 06:26:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:44.642 06:26:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:44.642 06:26:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:44.642 06:26:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:44.642 06:26:37 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:44.642 06:26:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.642 06:26:37 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:44.642 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:44.642 06:26:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.642 06:26:37 -- target/discovery.sh@49 -- # check_bdevs= 00:08:44.642 06:26:37 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:44.642 06:26:37 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:44.642 06:26:37 -- target/discovery.sh@57 -- # nvmftestfini 00:08:44.642 06:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:44.642 06:26:37 -- nvmf/common.sh@116 -- # sync 00:08:44.901 06:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:44.901 06:26:37 -- nvmf/common.sh@119 -- # set +e 00:08:44.901 06:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:44.901 06:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:44.901 rmmod nvme_tcp 00:08:44.901 rmmod nvme_fabrics 00:08:44.901 rmmod nvme_keyring 00:08:44.901 06:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:44.901 06:26:37 -- nvmf/common.sh@123 -- # set -e 00:08:44.901 06:26:37 -- nvmf/common.sh@124 -- # return 0 00:08:44.901 06:26:37 -- nvmf/common.sh@477 -- # '[' -n 73091 ']' 00:08:44.901 06:26:37 -- nvmf/common.sh@478 -- # killprocess 73091 00:08:44.901 06:26:37 -- common/autotest_common.sh@926 -- # '[' -z 73091 ']' 00:08:44.901 06:26:37 -- 
common/autotest_common.sh@930 -- # kill -0 73091 00:08:44.901 06:26:37 -- common/autotest_common.sh@931 -- # uname 00:08:44.901 06:26:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:44.901 06:26:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73091 00:08:44.901 06:26:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:44.901 06:26:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:44.901 killing process with pid 73091 00:08:44.901 06:26:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73091' 00:08:44.901 06:26:37 -- common/autotest_common.sh@945 -- # kill 73091 00:08:44.901 [2024-10-04 06:26:37.444957] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:44.901 06:26:37 -- common/autotest_common.sh@950 -- # wait 73091 00:08:45.160 06:26:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.160 06:26:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:45.160 06:26:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:45.160 06:26:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.160 06:26:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:45.160 06:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.160 06:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.160 06:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.160 06:26:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:45.160 00:08:45.160 real 0m2.358s 00:08:45.160 user 0m6.616s 00:08:45.160 sys 0m0.630s 00:08:45.160 06:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.160 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:45.160 ************************************ 00:08:45.160 END TEST nvmf_discovery 00:08:45.160 ************************************ 00:08:45.160 06:26:37 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:45.160 06:26:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:45.160 06:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.160 06:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:45.160 ************************************ 00:08:45.160 START TEST nvmf_referrals 00:08:45.160 ************************************ 00:08:45.160 06:26:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:45.419 * Looking for test storage... 
00:08:45.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.419 06:26:37 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.419 06:26:37 -- nvmf/common.sh@7 -- # uname -s 00:08:45.419 06:26:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.419 06:26:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.419 06:26:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.419 06:26:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.419 06:26:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.419 06:26:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.419 06:26:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.419 06:26:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.419 06:26:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.419 06:26:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:45.419 06:26:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:45.419 06:26:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.419 06:26:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.419 06:26:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.419 06:26:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.419 06:26:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.419 06:26:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.419 06:26:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.419 06:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.419 06:26:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.419 06:26:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.419 06:26:37 -- 
paths/export.sh@5 -- # export PATH 00:08:45.419 06:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.419 06:26:37 -- nvmf/common.sh@46 -- # : 0 00:08:45.419 06:26:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:45.419 06:26:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:45.419 06:26:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:45.419 06:26:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.419 06:26:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.419 06:26:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:45.419 06:26:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:45.419 06:26:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:45.419 06:26:37 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:45.419 06:26:37 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:45.419 06:26:37 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:45.419 06:26:37 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:45.419 06:26:37 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:45.419 06:26:37 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:45.419 06:26:37 -- target/referrals.sh@37 -- # nvmftestinit 00:08:45.419 06:26:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:45.419 06:26:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.419 06:26:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:45.419 06:26:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:45.419 06:26:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:45.419 06:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.419 06:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.419 06:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.419 06:26:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:45.419 06:26:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:45.419 06:26:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.419 06:26:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.419 06:26:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:45.419 06:26:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:45.419 06:26:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.419 06:26:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.419 06:26:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.419 06:26:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.419 06:26:37 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.419 06:26:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.419 06:26:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.419 06:26:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.419 06:26:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:45.419 06:26:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:45.419 Cannot find device "nvmf_tgt_br" 00:08:45.419 06:26:37 -- nvmf/common.sh@154 -- # true 00:08:45.419 06:26:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.419 Cannot find device "nvmf_tgt_br2" 00:08:45.419 06:26:37 -- nvmf/common.sh@155 -- # true 00:08:45.419 06:26:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:45.419 06:26:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:45.419 Cannot find device "nvmf_tgt_br" 00:08:45.419 06:26:37 -- nvmf/common.sh@157 -- # true 00:08:45.419 06:26:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:45.419 Cannot find device "nvmf_tgt_br2" 00:08:45.419 06:26:37 -- nvmf/common.sh@158 -- # true 00:08:45.419 06:26:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:45.419 06:26:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:45.419 06:26:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.419 06:26:38 -- nvmf/common.sh@161 -- # true 00:08:45.419 06:26:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.419 06:26:38 -- nvmf/common.sh@162 -- # true 00:08:45.419 06:26:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.419 06:26:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.419 06:26:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.419 06:26:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.419 06:26:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.419 06:26:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.419 06:26:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.419 06:26:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:45.419 06:26:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:45.678 06:26:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:45.678 06:26:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:45.678 06:26:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:45.678 06:26:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:45.678 06:26:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.678 06:26:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.678 06:26:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.678 06:26:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:45.678 06:26:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:45.678 06:26:38 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.678 06:26:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.678 06:26:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.678 06:26:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.678 06:26:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.678 06:26:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:45.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:45.678 00:08:45.678 --- 10.0.0.2 ping statistics --- 00:08:45.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.678 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:45.678 06:26:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:45.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:08:45.678 00:08:45.678 --- 10.0.0.3 ping statistics --- 00:08:45.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.678 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:45.678 06:26:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:08:45.678 00:08:45.678 --- 10.0.0.1 ping statistics --- 00:08:45.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.678 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:08:45.678 06:26:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.678 06:26:38 -- nvmf/common.sh@421 -- # return 0 00:08:45.678 06:26:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.678 06:26:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.678 06:26:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:45.678 06:26:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:45.678 06:26:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.678 06:26:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:45.678 06:26:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:45.678 06:26:38 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:45.678 06:26:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.678 06:26:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:45.678 06:26:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.678 06:26:38 -- nvmf/common.sh@469 -- # nvmfpid=73314 00:08:45.678 06:26:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.678 06:26:38 -- nvmf/common.sh@470 -- # waitforlisten 73314 00:08:45.678 06:26:38 -- common/autotest_common.sh@819 -- # '[' -z 73314 ']' 00:08:45.678 06:26:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.678 06:26:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.678 06:26:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
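(For reference, the virtual topology that nvmf_veth_init builds in the trace above can be reproduced by hand. The following is a minimal sketch distilled from that trace, reusing the same namespace, interface, and address names; it omits the second target interface pair (nvmf_tgt_if2/nvmf_tgt_br2, 10.0.0.3) that the harness also wires up.)

  # Target side lives in its own network namespace; veth pairs cross the boundary.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator, 10.0.0.2 = target, matching the ping checks above.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bridge the host-side peers, bring the links up, and let NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT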
00:08:45.678 06:26:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.678 06:26:38 -- common/autotest_common.sh@10 -- # set +x 00:08:45.678 [2024-10-04 06:26:38.284771] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:08:45.678 [2024-10-04 06:26:38.284895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.936 [2024-10-04 06:26:38.422304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.936 [2024-10-04 06:26:38.505612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.936 [2024-10-04 06:26:38.505744] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.936 [2024-10-04 06:26:38.505756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.936 [2024-10-04 06:26:38.505764] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.936 [2024-10-04 06:26:38.506391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.936 [2024-10-04 06:26:38.506604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.937 [2024-10-04 06:26:38.507300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.937 [2024-10-04 06:26:38.507318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.873 06:26:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.873 06:26:39 -- common/autotest_common.sh@852 -- # return 0 00:08:46.873 06:26:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.873 06:26:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 06:26:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.873 06:26:39 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 [2024-10-04 06:26:39.347289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.873 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.873 06:26:39 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 [2024-10-04 06:26:39.376885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:46.873 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.873 06:26:39 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.873 06:26:39 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.873 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.873 06:26:39 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.873 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.873 06:26:39 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.873 06:26:39 -- target/referrals.sh@48 -- # jq length 00:08:46.873 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.873 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.874 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.874 06:26:39 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:46.874 06:26:39 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:46.874 06:26:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:46.874 06:26:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:46.874 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.874 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.874 06:26:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:46.874 06:26:39 -- target/referrals.sh@21 -- # sort 00:08:46.874 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.874 06:26:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:46.874 06:26:39 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:46.874 06:26:39 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:46.874 06:26:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:46.874 06:26:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:46.874 06:26:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:46.874 06:26:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:46.874 06:26:39 -- target/referrals.sh@26 -- # sort 00:08:47.133 06:26:39 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:47.133 06:26:39 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:47.133 06:26:39 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:47.133 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.133 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.133 06:26:39 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:47.133 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.133 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.133 06:26:39 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:47.133 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.133 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 06:26:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.133 06:26:39 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.133 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.133 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 06:26:39 -- target/referrals.sh@56 -- # jq length 00:08:47.133 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.133 06:26:39 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:47.133 06:26:39 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:47.133 06:26:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.133 06:26:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.133 06:26:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.133 06:26:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.133 06:26:39 -- target/referrals.sh@26 -- # sort 00:08:47.392 06:26:39 -- target/referrals.sh@26 -- # echo 00:08:47.392 06:26:39 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:47.392 06:26:39 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:47.392 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.392 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.392 06:26:39 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.392 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.392 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.392 06:26:39 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:47.392 06:26:39 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.392 06:26:39 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.392 06:26:39 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.392 06:26:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.392 06:26:39 -- target/referrals.sh@21 -- # sort 00:08:47.392 06:26:39 -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 06:26:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.392 06:26:39 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:47.392 06:26:39 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.392 06:26:39 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:47.392 06:26:39 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.392 06:26:39 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.392 06:26:39 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.392 06:26:39 -- target/referrals.sh@26 -- # sort 00:08:47.392 06:26:39 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.651 06:26:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:47.651 06:26:40 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:47.651 06:26:40 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:47.651 06:26:40 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:47.651 06:26:40 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.651 06:26:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.651 06:26:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:47.651 06:26:40 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.651 06:26:40 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:47.651 06:26:40 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:47.651 06:26:40 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:47.651 06:26:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.651 06:26:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:47.651 06:26:40 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:47.651 06:26:40 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:47.651 06:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.651 06:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 06:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.651 06:26:40 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:47.651 06:26:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:47.651 06:26:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:47.651 06:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.651 06:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:47.910 06:26:40 -- target/referrals.sh@21 -- # sort 00:08:47.910 06:26:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:47.910 06:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.910 06:26:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:47.910 06:26:40 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.910 06:26:40 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:47.910 06:26:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:47.910 06:26:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:47.910 06:26:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:47.910 06:26:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.910 06:26:40 -- target/referrals.sh@26 -- # sort 00:08:47.910 06:26:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:47.910 06:26:40 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:47.910 06:26:40 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:08:47.910 06:26:40 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:47.910 06:26:40 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:47.910 06:26:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:47.910 06:26:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:48.169 06:26:40 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:48.169 06:26:40 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:48.169 06:26:40 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:48.169 06:26:40 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:48.169 06:26:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.169 06:26:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:48.169 06:26:40 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:48.169 06:26:40 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:48.169 06:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.169 06:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:48.169 06:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.169 06:26:40 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:48.169 06:26:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.169 06:26:40 -- common/autotest_common.sh@10 -- # set +x 00:08:48.169 06:26:40 -- target/referrals.sh@82 -- # jq length 00:08:48.169 06:26:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.169 06:26:40 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:48.169 06:26:40 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:48.169 06:26:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:48.169 06:26:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:48.169 06:26:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:48.170 06:26:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:48.170 06:26:40 -- target/referrals.sh@26 -- # sort 00:08:48.429 06:26:40 -- target/referrals.sh@26 -- # echo 00:08:48.429 06:26:40 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:48.429 06:26:40 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:48.429 06:26:40 -- target/referrals.sh@86 -- # nvmftestfini 00:08:48.429 06:26:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.429 06:26:40 -- nvmf/common.sh@116 -- # sync 00:08:48.429 06:26:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.429 06:26:41 -- nvmf/common.sh@119 -- # set +e 00:08:48.429 06:26:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.429 06:26:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.429 rmmod nvme_tcp 00:08:48.429 rmmod nvme_fabrics 00:08:48.429 rmmod nvme_keyring 
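(The referral round-trip asserted above can be reproduced against any running target. A minimal sketch, assuming an SPDK checkout with scripts/rpc.py on the target host and nvme-cli plus jq on the initiator; the rpc.py invocations are illustrative stand-ins for the test's rpc_cmd wrapper, which drives the same RPC methods.)

  # Target side: TCP transport, discovery listener on the well-known port 8009,
  # then one referral pointing clients at another discovery service on 4430.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 1

  # Initiator side, mirroring get_referral_ips: discover on 8009 and drop the
  # record that describes the current discovery subsystem itself.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # Tear the referral down again; nvmf_discovery_get_referrals drops back to 0.
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430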
00:08:48.429 06:26:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.429 06:26:41 -- nvmf/common.sh@123 -- # set -e 00:08:48.429 06:26:41 -- nvmf/common.sh@124 -- # return 0 00:08:48.429 06:26:41 -- nvmf/common.sh@477 -- # '[' -n 73314 ']' 00:08:48.429 06:26:41 -- nvmf/common.sh@478 -- # killprocess 73314 00:08:48.429 06:26:41 -- common/autotest_common.sh@926 -- # '[' -z 73314 ']' 00:08:48.429 06:26:41 -- common/autotest_common.sh@930 -- # kill -0 73314 00:08:48.429 06:26:41 -- common/autotest_common.sh@931 -- # uname 00:08:48.429 06:26:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.429 06:26:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73314 00:08:48.429 06:26:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.429 06:26:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.429 killing process with pid 73314 00:08:48.429 06:26:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73314' 00:08:48.429 06:26:41 -- common/autotest_common.sh@945 -- # kill 73314 00:08:48.429 06:26:41 -- common/autotest_common.sh@950 -- # wait 73314 00:08:48.688 06:26:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.688 06:26:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.688 06:26:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.688 06:26:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.688 06:26:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.688 06:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.688 06:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.688 06:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.946 06:26:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:48.946 00:08:48.946 real 0m3.598s 00:08:48.946 user 0m12.320s 00:08:48.946 sys 0m0.915s 00:08:48.946 06:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.946 ************************************ 00:08:48.946 06:26:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.946 END TEST nvmf_referrals 00:08:48.946 ************************************ 00:08:48.946 06:26:41 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:48.946 06:26:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:48.946 06:26:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.946 06:26:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.946 ************************************ 00:08:48.946 START TEST nvmf_connect_disconnect 00:08:48.946 ************************************ 00:08:48.947 06:26:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:48.947 * Looking for test storage... 
00:08:48.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:48.947 06:26:41 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:48.947 06:26:41 -- nvmf/common.sh@7 -- # uname -s 00:08:48.947 06:26:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.947 06:26:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.947 06:26:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.947 06:26:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.947 06:26:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.947 06:26:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.947 06:26:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.947 06:26:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.947 06:26:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.947 06:26:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:48.947 06:26:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:08:48.947 06:26:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.947 06:26:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.947 06:26:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:48.947 06:26:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:48.947 06:26:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.947 06:26:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.947 06:26:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.947 06:26:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.947 06:26:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.947 06:26:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.947 06:26:41 -- 
paths/export.sh@5 -- # export PATH 00:08:48.947 06:26:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.947 06:26:41 -- nvmf/common.sh@46 -- # : 0 00:08:48.947 06:26:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:48.947 06:26:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:48.947 06:26:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:48.947 06:26:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.947 06:26:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.947 06:26:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:48.947 06:26:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:48.947 06:26:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:48.947 06:26:41 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.947 06:26:41 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.947 06:26:41 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:48.947 06:26:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:48.947 06:26:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.947 06:26:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:48.947 06:26:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:48.947 06:26:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:48.947 06:26:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.947 06:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.947 06:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.947 06:26:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:48.947 06:26:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:48.947 06:26:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.947 06:26:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.947 06:26:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:48.947 06:26:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:48.947 06:26:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:48.947 06:26:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:48.947 06:26:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:48.947 06:26:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.947 06:26:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:48.947 06:26:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:48.947 06:26:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:48.947 06:26:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:48.947 06:26:41 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:48.947 06:26:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:48.947 Cannot find device "nvmf_tgt_br" 00:08:48.947 06:26:41 -- nvmf/common.sh@154 -- # true 00:08:48.947 06:26:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.947 Cannot find device "nvmf_tgt_br2" 00:08:48.947 06:26:41 -- nvmf/common.sh@155 -- # true 00:08:48.947 06:26:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:48.947 06:26:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:48.947 Cannot find device "nvmf_tgt_br" 00:08:48.947 06:26:41 -- nvmf/common.sh@157 -- # true 00:08:48.947 06:26:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:49.206 Cannot find device "nvmf_tgt_br2" 00:08:49.206 06:26:41 -- nvmf/common.sh@158 -- # true 00:08:49.206 06:26:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:49.206 06:26:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:49.206 06:26:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.206 06:26:41 -- nvmf/common.sh@161 -- # true 00:08:49.206 06:26:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.206 06:26:41 -- nvmf/common.sh@162 -- # true 00:08:49.206 06:26:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.206 06:26:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.206 06:26:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.206 06:26:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.206 06:26:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.206 06:26:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.206 06:26:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.206 06:26:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:49.206 06:26:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:49.206 06:26:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:49.206 06:26:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:49.206 06:26:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:49.206 06:26:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:49.206 06:26:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.206 06:26:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.206 06:26:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.206 06:26:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:49.206 06:26:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:49.206 06:26:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.206 06:26:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.206 06:26:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.206 06:26:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:08:49.206 06:26:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.206 06:26:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:49.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:08:49.206 00:08:49.206 --- 10.0.0.2 ping statistics --- 00:08:49.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.206 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:49.206 06:26:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:49.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:08:49.468 00:08:49.468 --- 10.0.0.3 ping statistics --- 00:08:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.468 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:49.468 06:26:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:49.468 00:08:49.468 --- 10.0.0.1 ping statistics --- 00:08:49.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.468 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:49.468 06:26:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.468 06:26:41 -- nvmf/common.sh@421 -- # return 0 00:08:49.468 06:26:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:49.468 06:26:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.468 06:26:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:49.468 06:26:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:49.468 06:26:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.468 06:26:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:49.468 06:26:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:49.468 06:26:41 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:49.468 06:26:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:49.468 06:26:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.468 06:26:41 -- common/autotest_common.sh@10 -- # set +x 00:08:49.468 06:26:41 -- nvmf/common.sh@469 -- # nvmfpid=73621 00:08:49.468 06:26:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.468 06:26:41 -- nvmf/common.sh@470 -- # waitforlisten 73621 00:08:49.468 06:26:41 -- common/autotest_common.sh@819 -- # '[' -z 73621 ']' 00:08:49.468 06:26:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.468 06:26:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.468 06:26:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.468 06:26:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.468 06:26:41 -- common/autotest_common.sh@10 -- # set +x 00:08:49.468 [2024-10-04 06:26:41.985865] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:08:49.468 [2024-10-04 06:26:41.985970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.468 [2024-10-04 06:26:42.130281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.767 [2024-10-04 06:26:42.218280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.767 [2024-10-04 06:26:42.218422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.767 [2024-10-04 06:26:42.218434] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.767 [2024-10-04 06:26:42.218442] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.767 [2024-10-04 06:26:42.218615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.767 [2024-10-04 06:26:42.219066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.767 [2024-10-04 06:26:42.219525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.767 [2024-10-04 06:26:42.219536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.340 06:26:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.340 06:26:42 -- common/autotest_common.sh@852 -- # return 0 00:08:50.340 06:26:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:50.340 06:26:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.340 06:26:42 -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 06:26:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.340 06:26:43 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:50.340 06:26:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.340 06:26:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 [2024-10-04 06:26:43.023093] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.599 06:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:50.599 06:26:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.599 06:26:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 06:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:50.599 06:26:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.599 06:26:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 06:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.599 06:26:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.599 06:26:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 06:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.599 06:26:43 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.599 06:26:43 -- common/autotest_common.sh@10 -- # set +x 00:08:50.599 [2024-10-04 06:26:43.104878] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.599 06:26:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:50.599 06:26:43 -- target/connect_disconnect.sh@34 -- # set +x
[00:08:53.132 to 00:12:36.021: all 100 connect/disconnect iterations ran; each iteration logged an identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line, condensed here]
00:12:36.021 06:30:28 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:36.021 06:30:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:36.021 06:30:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.021 06:30:28 -- nvmf/common.sh@116 -- # sync 00:12:36.021 06:30:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:36.021 06:30:28 -- nvmf/common.sh@119 -- # set +e 00:12:36.021 06:30:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:36.021 06:30:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:36.280 rmmod nvme_tcp 00:12:36.280 rmmod nvme_fabrics 00:12:36.280 rmmod nvme_keyring 00:12:36.280 06:30:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:36.280 06:30:28 -- nvmf/common.sh@123 -- # set -e 00:12:36.280 06:30:28 -- nvmf/common.sh@124 -- # return 0 00:12:36.280 06:30:28 -- nvmf/common.sh@477 -- # '[' -n 73621 ']' 00:12:36.280 06:30:28 -- nvmf/common.sh@478 -- # killprocess 73621 00:12:36.280 06:30:28 -- common/autotest_common.sh@926 -- # '[' -z 73621 ']' 00:12:36.280 06:30:28 -- common/autotest_common.sh@930 -- # kill -0 73621 00:12:36.280 06:30:28 -- common/autotest_common.sh@931 -- # uname 00:12:36.280 06:30:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.280 06:30:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73621 00:12:36.280 06:30:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:36.280 06:30:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:36.280 killing process with pid 73621 00:12:36.280 06:30:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73621' 00:12:36.280 06:30:28 -- common/autotest_common.sh@945 -- # kill 73621 00:12:36.280 06:30:28 -- common/autotest_common.sh@950 -- # wait 73621 00:12:36.539 06:30:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:36.539 06:30:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:36.539 06:30:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:36.539 06:30:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.539 06:30:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:36.539 06:30:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.539 06:30:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.539 06:30:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.539 06:30:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.539 00:12:36.539 real 3m47.650s 00:12:36.539 user 14m49.861s 00:12:36.539 sys 0m20.089s 00:12:36.539 06:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.539
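The loop condensed above is easy to reproduce by hand. A minimal sketch, assuming nvme-cli is installed and the veth topology from earlier in this log is in place; rpc.py here stands for SPDK's scripts/rpc.py, which is what the rpc_cmd wrapper in the trace invokes, and the NQN and serial are the values this run used:

    # Target-side provisioning, as issued via rpc_cmd in the trace above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512          # returns bdev name "Malloc0"
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side loop: 100 connects, each waiting for the namespace to
    # show up as a block device before disconnecting by NQN.
    NQN=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
            sleep 1
        done
        nvme disconnect -n "$NQN"   # prints "NQN:... disconnected 1 controller(s)"
    done

The -i 8 on connect (eight I/O queues) matches the NVME_CONNECT='nvme connect -i 8' setting in the trace; the lsblk-plus-grep wait is the same check the harness's waitforserial helper performs.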
06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:36.539 ************************************ 00:12:36.539 END TEST nvmf_connect_disconnect 00:12:36.539 ************************************ 00:12:36.539 06:30:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:36.539 06:30:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:36.539 06:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.539 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:36.539 ************************************ 00:12:36.539 START TEST nvmf_multitarget 00:12:36.539 ************************************ 00:12:36.539 06:30:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:36.799 * Looking for test storage... 00:12:36.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.799 06:30:29 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.799 06:30:29 -- nvmf/common.sh@7 -- # uname -s 00:12:36.799 06:30:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.799 06:30:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.799 06:30:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.799 06:30:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.799 06:30:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.799 06:30:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.799 06:30:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.799 06:30:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.799 06:30:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.799 06:30:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.799 06:30:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:36.799 06:30:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:36.799 06:30:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.799 06:30:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.799 06:30:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.799 06:30:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.799 06:30:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.799 06:30:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.799 06:30:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.799 06:30:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.799 06:30:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.799 06:30:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.799 06:30:29 -- paths/export.sh@5 -- # export PATH 00:12:36.799 06:30:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.799 06:30:29 -- nvmf/common.sh@46 -- # : 0 00:12:36.799 06:30:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.799 06:30:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.799 06:30:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.799 06:30:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.799 06:30:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.799 06:30:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.799 06:30:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.799 06:30:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.799 06:30:29 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:36.799 06:30:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:36.799 06:30:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.799 06:30:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.799 06:30:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.799 06:30:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.799 06:30:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.799 06:30:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.799 06:30:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.799 06:30:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.799 06:30:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:36.799 06:30:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:36.799 06:30:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:36.799 06:30:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:36.799 06:30:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:36.799 06:30:29 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:12:36.799 06:30:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.799 06:30:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.799 06:30:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.799 06:30:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:36.799 06:30:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.799 06:30:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.799 06:30:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.799 06:30:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.799 06:30:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.799 06:30:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.799 06:30:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.799 06:30:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.799 06:30:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:36.799 06:30:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:36.799 Cannot find device "nvmf_tgt_br" 00:12:36.799 06:30:29 -- nvmf/common.sh@154 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.799 Cannot find device "nvmf_tgt_br2" 00:12:36.799 06:30:29 -- nvmf/common.sh@155 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:36.799 06:30:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:36.799 Cannot find device "nvmf_tgt_br" 00:12:36.799 06:30:29 -- nvmf/common.sh@157 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:36.799 Cannot find device "nvmf_tgt_br2" 00:12:36.799 06:30:29 -- nvmf/common.sh@158 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:36.799 06:30:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:36.799 06:30:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.799 06:30:29 -- nvmf/common.sh@161 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.799 06:30:29 -- nvmf/common.sh@162 -- # true 00:12:36.799 06:30:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.799 06:30:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.799 06:30:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.799 06:30:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.799 06:30:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.058 06:30:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.058 06:30:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.058 06:30:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.058 06:30:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.058 06:30:29 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:12:37.058 06:30:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:37.058 06:30:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:37.058 06:30:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:37.058 06:30:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.058 06:30:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.058 06:30:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.058 06:30:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:37.058 06:30:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:37.058 06:30:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.058 06:30:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.058 06:30:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.058 06:30:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.058 06:30:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.058 06:30:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:37.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:37.058 00:12:37.058 --- 10.0.0.2 ping statistics --- 00:12:37.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.058 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:37.058 06:30:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:37.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:37.059 00:12:37.059 --- 10.0.0.3 ping statistics --- 00:12:37.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.059 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:37.059 06:30:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:37.059 00:12:37.059 --- 10.0.0.1 ping statistics --- 00:12:37.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.059 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:37.059 06:30:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.059 06:30:29 -- nvmf/common.sh@421 -- # return 0 00:12:37.059 06:30:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.059 06:30:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.059 06:30:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.059 06:30:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.059 06:30:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.059 06:30:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.059 06:30:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.059 06:30:29 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:37.059 06:30:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.059 06:30:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:37.059 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.059 06:30:29 -- nvmf/common.sh@469 -- # nvmfpid=77415 00:12:37.059 06:30:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.059 06:30:29 -- nvmf/common.sh@470 -- # waitforlisten 77415 00:12:37.059 06:30:29 -- common/autotest_common.sh@819 -- # '[' -z 77415 ']' 00:12:37.059 06:30:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.059 06:30:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.059 06:30:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.059 06:30:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.059 06:30:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.059 [2024-10-04 06:30:29.723718] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:12:37.059 [2024-10-04 06:30:29.724037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.318 [2024-10-04 06:30:29.863286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.318 [2024-10-04 06:30:29.954901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.318 [2024-10-04 06:30:29.955243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.318 [2024-10-04 06:30:29.955375] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.318 [2024-10-04 06:30:29.955617] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
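The multitarget run below exercises per-target create and delete through test/nvmf/target/multitarget_rpc.py. Stripped of the xtrace noise, the flow it drives is roughly the following; the path is as in this checkout, and the expected outputs from this run are shown as comments:

    RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    $RPC nvmf_get_targets | jq length            # 1: only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32  # prints "nvmf_tgt_1"
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32  # prints "nvmf_tgt_2"
    $RPC nvmf_get_targets | jq length            # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1        # prints "true"
    $RPC nvmf_delete_target -n nvmf_tgt_2        # prints "true"
    $RPC nvmf_get_targets | jq length            # back to 1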
00:12:37.318 [2024-10-04 06:30:29.955909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.318 [2024-10-04 06:30:29.955978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.318 [2024-10-04 06:30:29.956093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.318 [2024-10-04 06:30:29.956086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.255 06:30:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:38.255 06:30:30 -- common/autotest_common.sh@852 -- # return 0 00:12:38.255 06:30:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:38.255 06:30:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:38.255 06:30:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.255 06:30:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.255 06:30:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:38.255 06:30:30 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.255 06:30:30 -- target/multitarget.sh@21 -- # jq length 00:12:38.513 06:30:30 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:38.513 06:30:30 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:38.513 "nvmf_tgt_1" 00:12:38.513 06:30:31 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:38.772 "nvmf_tgt_2" 00:12:38.772 06:30:31 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:38.772 06:30:31 -- target/multitarget.sh@28 -- # jq length 00:12:38.772 06:30:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:38.772 06:30:31 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:39.031 true 00:12:39.031 06:30:31 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:39.290 true 00:12:39.290 06:30:31 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.290 06:30:31 -- target/multitarget.sh@35 -- # jq length 00:12:39.290 06:30:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:39.290 06:30:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:39.290 06:30:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:39.290 06:30:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:39.290 06:30:31 -- nvmf/common.sh@116 -- # sync 00:12:39.290 06:30:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:39.290 06:30:31 -- nvmf/common.sh@119 -- # set +e 00:12:39.290 06:30:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:39.290 06:30:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:39.290 rmmod nvme_tcp 00:12:39.290 rmmod nvme_fabrics 00:12:39.290 rmmod nvme_keyring 00:12:39.290 06:30:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:39.549 06:30:31 -- nvmf/common.sh@123 -- # set -e 00:12:39.549 06:30:31 -- nvmf/common.sh@124 -- # return 0 00:12:39.549 06:30:31 -- nvmf/common.sh@477 -- # '[' -n 77415 ']' 00:12:39.549 06:30:31 -- nvmf/common.sh@478 -- # killprocess 77415 00:12:39.549 06:30:31 
-- common/autotest_common.sh@926 -- # '[' -z 77415 ']' 00:12:39.549 06:30:31 -- common/autotest_common.sh@930 -- # kill -0 77415 00:12:39.549 06:30:31 -- common/autotest_common.sh@931 -- # uname 00:12:39.549 06:30:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:39.549 06:30:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77415 00:12:39.549 killing process with pid 77415 00:12:39.549 06:30:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:39.549 06:30:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:39.549 06:30:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77415' 00:12:39.549 06:30:32 -- common/autotest_common.sh@945 -- # kill 77415 00:12:39.549 06:30:32 -- common/autotest_common.sh@950 -- # wait 77415 00:12:39.549 06:30:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:39.549 06:30:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:39.549 06:30:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:39.549 06:30:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.549 06:30:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:39.549 06:30:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.549 06:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.549 06:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.808 06:30:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:39.808 ************************************ 00:12:39.808 END TEST nvmf_multitarget 00:12:39.808 ************************************ 00:12:39.808 00:12:39.808 real 0m3.107s 00:12:39.808 user 0m10.414s 00:12:39.808 sys 0m0.720s 00:12:39.808 06:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.808 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 06:30:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.808 06:30:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:39.808 06:30:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.808 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:12:39.808 ************************************ 00:12:39.808 START TEST nvmf_rpc 00:12:39.808 ************************************ 00:12:39.808 06:30:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:39.808 * Looking for test storage... 
00:12:39.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.808 06:30:32 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.808 06:30:32 -- nvmf/common.sh@7 -- # uname -s 00:12:39.808 06:30:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.808 06:30:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.808 06:30:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.808 06:30:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.808 06:30:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.808 06:30:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.808 06:30:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.808 06:30:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.808 06:30:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.808 06:30:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.808 06:30:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:39.808 06:30:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:39.808 06:30:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.809 06:30:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.809 06:30:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.809 06:30:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.809 06:30:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.809 06:30:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.809 06:30:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.809 06:30:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.809 06:30:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.809 06:30:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.809 06:30:32 -- paths/export.sh@5 
-- # export PATH 00:12:39.809 06:30:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.809 06:30:32 -- nvmf/common.sh@46 -- # : 0 00:12:39.809 06:30:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:39.809 06:30:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:39.809 06:30:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:39.809 06:30:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.809 06:30:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.809 06:30:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:39.809 06:30:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:39.809 06:30:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:39.809 06:30:32 -- target/rpc.sh@11 -- # loops=5 00:12:39.809 06:30:32 -- target/rpc.sh@23 -- # nvmftestinit 00:12:39.809 06:30:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:39.809 06:30:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.809 06:30:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:39.809 06:30:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:39.809 06:30:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:39.809 06:30:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.809 06:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.809 06:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.809 06:30:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:39.809 06:30:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:39.809 06:30:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:39.809 06:30:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:39.809 06:30:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:39.809 06:30:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:39.809 06:30:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.809 06:30:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.809 06:30:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:39.809 06:30:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:39.809 06:30:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.809 06:30:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.809 06:30:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.809 06:30:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.809 06:30:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.809 06:30:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.809 06:30:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.809 06:30:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.809 06:30:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:39.809 06:30:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:39.809 Cannot find device 
"nvmf_tgt_br" 00:12:39.809 06:30:32 -- nvmf/common.sh@154 -- # true 00:12:39.809 06:30:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.809 Cannot find device "nvmf_tgt_br2" 00:12:39.809 06:30:32 -- nvmf/common.sh@155 -- # true 00:12:39.809 06:30:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:39.809 06:30:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:39.809 Cannot find device "nvmf_tgt_br" 00:12:39.809 06:30:32 -- nvmf/common.sh@157 -- # true 00:12:39.809 06:30:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:40.068 Cannot find device "nvmf_tgt_br2" 00:12:40.068 06:30:32 -- nvmf/common.sh@158 -- # true 00:12:40.068 06:30:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:40.068 06:30:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:40.068 06:30:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.068 06:30:32 -- nvmf/common.sh@161 -- # true 00:12:40.068 06:30:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.068 06:30:32 -- nvmf/common.sh@162 -- # true 00:12:40.068 06:30:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:40.068 06:30:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:40.068 06:30:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:40.068 06:30:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:40.068 06:30:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.068 06:30:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.068 06:30:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.068 06:30:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:40.068 06:30:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:40.068 06:30:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:40.068 06:30:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:40.068 06:30:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:40.068 06:30:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:40.068 06:30:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.068 06:30:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.068 06:30:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.068 06:30:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:40.068 06:30:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:40.068 06:30:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.068 06:30:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.068 06:30:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.068 06:30:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.068 06:30:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.327 06:30:32 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:40.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:12:40.327 00:12:40.327 --- 10.0.0.2 ping statistics --- 00:12:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.327 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:40.327 06:30:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:40.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:40.327 00:12:40.327 --- 10.0.0.3 ping statistics --- 00:12:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.327 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:40.327 06:30:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:40.327 00:12:40.327 --- 10.0.0.1 ping statistics --- 00:12:40.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.327 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:40.327 06:30:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.327 06:30:32 -- nvmf/common.sh@421 -- # return 0 00:12:40.327 06:30:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:40.327 06:30:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.327 06:30:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:40.327 06:30:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:40.327 06:30:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.327 06:30:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:40.327 06:30:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:40.327 06:30:32 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:40.327 06:30:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:40.327 06:30:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:40.327 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 06:30:32 -- nvmf/common.sh@469 -- # nvmfpid=77652 00:12:40.327 06:30:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.327 06:30:32 -- nvmf/common.sh@470 -- # waitforlisten 77652 00:12:40.327 06:30:32 -- common/autotest_common.sh@819 -- # '[' -z 77652 ']' 00:12:40.327 06:30:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.327 06:30:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:40.327 06:30:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.327 06:30:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:40.327 06:30:32 -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 [2024-10-04 06:30:32.851201] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:12:40.327 [2024-10-04 06:30:32.851553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.327 [2024-10-04 06:30:32.995515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.586 [2024-10-04 06:30:33.080997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:40.586 [2024-10-04 06:30:33.081425] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.586 [2024-10-04 06:30:33.081640] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.586 [2024-10-04 06:30:33.081930] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.586 [2024-10-04 06:30:33.082176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.586 [2024-10-04 06:30:33.082260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.586 [2024-10-04 06:30:33.082409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.586 [2024-10-04 06:30:33.082421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.523 06:30:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:41.523 06:30:33 -- common/autotest_common.sh@852 -- # return 0 00:12:41.523 06:30:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:41.523 06:30:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:41.523 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.523 06:30:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.523 06:30:33 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:41.523 06:30:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.523 06:30:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.523 06:30:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.523 06:30:33 -- target/rpc.sh@26 -- # stats='{ 00:12:41.523 "poll_groups": [ 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_0", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_1", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_2", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_3", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [] 00:12:41.523 } 00:12:41.523 ], 00:12:41.523 "tick_rate": 2200000000 00:12:41.523 }' 00:12:41.523 
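rpc.sh validates the nvmf_get_stats JSON captured above with two small jq helpers, jcount and jsum, whose expansions are traced just below. Reconstructed from that trace (not copied from the script), they amount to:

    # Assumes scripts/rpc.py reachable as rpc.py and an nvmf_tgt answering
    # on the default RPC socket.
    jcount() { local filter=$1; jq "$filter" <<< "$stats" | wc -l; }
    jsum()   { local filter=$1; jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'; }

    stats=$(rpc.py nvmf_get_stats)
    jcount '.poll_groups[].name'         # 4: one poll group per core in -m 0xF
    jsum '.poll_groups[].admin_qpairs'   # 0: nothing connected yet
    jsum '.poll_groups[].io_qpairs'      # 0

jcount counts matching JSON values by line, jsum totals numeric fields across poll groups; both read 0 qpairs here because the stats are taken before any host connects.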
06:30:33 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:41.523 06:30:33 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:41.523 06:30:33 -- target/rpc.sh@15 -- # wc -l 00:12:41.523 06:30:33 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:41.523 06:30:34 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:41.523 06:30:34 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:41.523 06:30:34 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:41.523 06:30:34 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.523 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.523 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.523 [2024-10-04 06:30:34.079334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.523 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.523 06:30:34 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:41.523 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.523 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.523 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.523 06:30:34 -- target/rpc.sh@33 -- # stats='{ 00:12:41.523 "poll_groups": [ 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_0", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [ 00:12:41.523 { 00:12:41.523 "trtype": "TCP" 00:12:41.523 } 00:12:41.523 ] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_1", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [ 00:12:41.523 { 00:12:41.523 "trtype": "TCP" 00:12:41.523 } 00:12:41.523 ] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_2", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [ 00:12:41.523 { 00:12:41.523 "trtype": "TCP" 00:12:41.523 } 00:12:41.523 ] 00:12:41.523 }, 00:12:41.523 { 00:12:41.523 "admin_qpairs": 0, 00:12:41.523 "completed_nvme_io": 0, 00:12:41.523 "current_admin_qpairs": 0, 00:12:41.523 "current_io_qpairs": 0, 00:12:41.523 "io_qpairs": 0, 00:12:41.523 "name": "nvmf_tgt_poll_group_3", 00:12:41.523 "pending_bdev_io": 0, 00:12:41.523 "transports": [ 00:12:41.523 { 00:12:41.523 "trtype": "TCP" 00:12:41.523 } 00:12:41.523 ] 00:12:41.523 } 00:12:41.523 ], 00:12:41.523 "tick_rate": 2200000000 00:12:41.523 }' 00:12:41.523 06:30:34 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.523 06:30:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.523 06:30:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.523 06:30:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.523 06:30:34 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:41.523 06:30:34 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:41.523 06:30:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:41.523 06:30:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:41.523 06:30:34 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:41.782 06:30:34 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:41.782 06:30:34 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:41.782 06:30:34 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:41.782 06:30:34 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:41.782 06:30:34 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:41.782 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.782 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 Malloc1 00:12:41.782 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.782 06:30:34 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:41.782 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.782 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.782 06:30:34 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:41.782 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.782 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.782 06:30:34 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:41.782 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.782 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.782 06:30:34 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.782 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.782 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.782 [2024-10-04 06:30:34.292650] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.782 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.782 06:30:34 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -a 10.0.0.2 -s 4420 00:12:41.782 06:30:34 -- common/autotest_common.sh@640 -- # local es=0 00:12:41.783 06:30:34 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -a 10.0.0.2 -s 4420 00:12:41.783 06:30:34 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:41.783 06:30:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:41.783 06:30:34 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:41.783 06:30:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:41.783 06:30:34 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:41.783 06:30:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:41.783 06:30:34 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:41.783 06:30:34 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:41.783 06:30:34 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -a 10.0.0.2 -s 4420 00:12:41.783 [2024-10-04 06:30:34.318962] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c' 00:12:41.783 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:41.783 could not add new controller: failed to write to nvme-fabrics device 00:12:41.783 06:30:34 -- common/autotest_common.sh@643 -- # es=1 00:12:41.783 06:30:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:41.783 06:30:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:41.783 06:30:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:41.783 06:30:34 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:41.783 06:30:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.783 06:30:34 -- common/autotest_common.sh@10 -- # set +x 00:12:41.783 06:30:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.783 06:30:34 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.041 06:30:34 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.041 06:30:34 -- common/autotest_common.sh@1177 -- # local i=0 00:12:42.041 06:30:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.041 06:30:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:42.041 06:30:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.944 06:30:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.944 06:30:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.944 06:30:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.944 06:30:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.944 06:30:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.944 06:30:36 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.944 06:30:36 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.944 06:30:36 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.944 06:30:36 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.944 06:30:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.944 06:30:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.944 06:30:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.944 06:30:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.945 06:30:36 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.945 06:30:36 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:43.945 06:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.945 06:30:36 -- common/autotest_common.sh@10 
-- # set +x 00:12:43.945 06:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.945 06:30:36 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.945 06:30:36 -- common/autotest_common.sh@640 -- # local es=0 00:12:43.945 06:30:36 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.945 06:30:36 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:43.945 06:30:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.945 06:30:36 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:43.945 06:30:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.945 06:30:36 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:43.945 06:30:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.945 06:30:36 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:43.945 06:30:36 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.945 06:30:36 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.204 [2024-10-04 06:30:36.628658] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c' 00:12:44.204 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.204 could not add new controller: failed to write to nvme-fabrics device 00:12:44.204 06:30:36 -- common/autotest_common.sh@643 -- # es=1 00:12:44.204 06:30:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:44.204 06:30:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:44.204 06:30:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:44.204 06:30:36 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:44.204 06:30:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.204 06:30:36 -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 06:30:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.204 06:30:36 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.204 06:30:36 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.204 06:30:36 -- common/autotest_common.sh@1177 -- # local i=0 00:12:44.204 06:30:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.204 06:30:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:44.204 06:30:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:46.734 06:30:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:46.734 06:30:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:46.734 06:30:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.734 06:30:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:46.734 06:30:38 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.734 06:30:38 -- common/autotest_common.sh@1187 -- # return 0 00:12:46.734 06:30:38 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.734 06:30:38 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.734 06:30:38 -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.734 06:30:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:46.734 06:30:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.734 06:30:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:46.734 06:30:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.734 06:30:38 -- common/autotest_common.sh@1210 -- # return 0 00:12:46.734 06:30:38 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.734 06:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.734 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.734 06:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.734 06:30:38 -- target/rpc.sh@81 -- # seq 1 5 00:12:46.734 06:30:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.734 06:30:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.734 06:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.734 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.734 06:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.734 06:30:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.734 06:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.734 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.734 [2024-10-04 06:30:38.938423] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.734 06:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.734 06:30:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.734 06:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.734 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.734 06:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.734 06:30:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.734 06:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.734 06:30:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.734 06:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.734 06:30:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.734 06:30:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.734 06:30:39 -- common/autotest_common.sh@1177 -- # local i=0 00:12:46.734 06:30:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.734 06:30:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:46.734 06:30:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:48.635 06:30:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
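The connect/disconnect pairs above lean on two polling helpers whose xtrace lines are visible in the log: waitforserial waits until lsblk reports a block device carrying the subsystem serial, and waitforserial_disconnect waits for it to disappear again. Reconstructed from those trace lines as a sketch (the real helper lives in autotest_common.sh; the retry interval is an assumption, since the log shows it succeeding on the first pass):

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2                                           # let the fabrics connect settle
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 1                                       # assumed retry interval
    done
    return 1
}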
00:12:48.635 06:30:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:48.635 06:30:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.635 06:30:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:48.636 06:30:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.636 06:30:41 -- common/autotest_common.sh@1187 -- # return 0 00:12:48.636 06:30:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.636 06:30:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.636 06:30:41 -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.636 06:30:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:48.636 06:30:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.636 06:30:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:48.636 06:30:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.894 06:30:41 -- common/autotest_common.sh@1210 -- # return 0 00:12:48.894 06:30:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.894 06:30:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 [2024-10-04 06:30:41.351036] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.894 06:30:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.894 06:30:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.894 06:30:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.894 06:30:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 
--hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.894 06:30:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.894 06:30:41 -- common/autotest_common.sh@1177 -- # local i=0 00:12:48.894 06:30:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.894 06:30:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:48.894 06:30:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:51.453 06:30:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:51.453 06:30:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:51.453 06:30:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:51.453 06:30:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.453 06:30:43 -- common/autotest_common.sh@1187 -- # return 0 00:12:51.453 06:30:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.453 06:30:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.453 06:30:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:51.453 06:30:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:51.453 06:30:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@1210 -- # return 0 00:12:51.453 06:30:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.453 06:30:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 [2024-10-04 06:30:43.655918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.453 06:30:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.453 06:30:43 -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 06:30:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.453 06:30:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.453 06:30:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.453 06:30:43 -- common/autotest_common.sh@1177 -- # local i=0 00:12:51.453 06:30:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.453 06:30:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:51.453 06:30:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:53.361 06:30:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:53.361 06:30:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:53.361 06:30:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.361 06:30:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:53.361 06:30:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.361 06:30:45 -- common/autotest_common.sh@1187 -- # return 0 00:12:53.361 06:30:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.361 06:30:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.361 06:30:45 -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.361 06:30:45 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:53.361 06:30:45 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.361 06:30:45 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:53.361 06:30:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.361 06:30:45 -- common/autotest_common.sh@1210 -- # return 0 00:12:53.361 06:30:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.361 06:30:45 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 [2024-10-04 06:30:45.964566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.361 06:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.361 06:30:45 -- common/autotest_common.sh@10 -- # set +x 00:12:53.361 06:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.361 06:30:45 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.620 06:30:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.620 06:30:46 -- common/autotest_common.sh@1177 -- # local i=0 00:12:53.620 06:30:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.620 06:30:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:53.620 06:30:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:55.524 06:30:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:55.524 06:30:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:55.524 06:30:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.524 06:30:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:55.524 06:30:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.524 06:30:48 -- common/autotest_common.sh@1187 -- # return 0 00:12:55.524 06:30:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.783 06:30:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.783 06:30:48 -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.783 06:30:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:55.783 06:30:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.783 06:30:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:55.783 06:30:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.783 06:30:48 -- common/autotest_common.sh@1210 -- # return 0 00:12:55.783 06:30:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
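Each pass of the seq 1 5 loop above runs the same build-up/tear-down cycle against the target. Stripped of the trace prefixes, one iteration amounts to the following (every value is taken from this log; rpc.py stands for the repo's scripts/rpc.py):

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace ID 5
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c \
    --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1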
00:12:55.783 06:30:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 [2024-10-04 06:30:48.385397] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.783 06:30:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.783 06:30:48 -- common/autotest_common.sh@10 -- # set +x 00:12:55.783 06:30:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.783 06:30:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.042 06:30:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.042 06:30:48 -- common/autotest_common.sh@1177 -- # local i=0 00:12:56.042 06:30:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.042 06:30:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:56.042 06:30:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:57.946 06:30:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:57.946 06:30:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:57.946 06:30:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.946 06:30:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:57.946 06:30:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.946 06:30:50 -- common/autotest_common.sh@1187 -- # return 0 00:12:57.946 06:30:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.205 06:30:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.205 06:30:50 -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.205 06:30:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:58.205 06:30:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.205 06:30:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:58.205 06:30:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.205 06:30:50 -- common/autotest_common.sh@1210 -- # return 0 00:12:58.205 06:30:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@99 -- # seq 1 5 00:12:58.205 06:30:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.205 06:30:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 [2024-10-04 06:30:50.705437] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.205 06:30:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.205 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.205 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.205 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.205 06:30:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 [2024-10-04 06:30:50.753480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.206 06:30:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 [2024-10-04 06:30:50.805565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.206 06:30:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 [2024-10-04 06:30:50.853675] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.206 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.206 06:30:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.206 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.206 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:58.465 06:30:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 [2024-10-04 06:30:50.901759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:58.465 06:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.465 06:30:50 -- common/autotest_common.sh@10 -- # set +x 00:12:58.465 06:30:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.465 06:30:50 -- target/rpc.sh@110 -- # stats='{ 00:12:58.465 "poll_groups": [ 00:12:58.465 { 00:12:58.465 "admin_qpairs": 2, 00:12:58.465 "completed_nvme_io": 115, 00:12:58.465 "current_admin_qpairs": 0, 00:12:58.465 "current_io_qpairs": 0, 00:12:58.465 "io_qpairs": 16, 00:12:58.465 "name": "nvmf_tgt_poll_group_0", 00:12:58.465 "pending_bdev_io": 0, 00:12:58.465 "transports": [ 00:12:58.465 { 00:12:58.465 "trtype": "TCP" 00:12:58.465 } 00:12:58.465 ] 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "admin_qpairs": 3, 00:12:58.465 "completed_nvme_io": 167, 00:12:58.465 "current_admin_qpairs": 0, 00:12:58.465 "current_io_qpairs": 0, 00:12:58.465 "io_qpairs": 17, 00:12:58.465 "name": "nvmf_tgt_poll_group_1", 00:12:58.465 "pending_bdev_io": 0, 00:12:58.465 "transports": [ 00:12:58.465 { 00:12:58.465 "trtype": "TCP" 00:12:58.465 } 00:12:58.465 ] 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "admin_qpairs": 1, 00:12:58.465 "completed_nvme_io": 70, 00:12:58.465 "current_admin_qpairs": 0, 00:12:58.465 "current_io_qpairs": 0, 00:12:58.465 "io_qpairs": 19, 00:12:58.465 "name": "nvmf_tgt_poll_group_2", 00:12:58.465 "pending_bdev_io": 0, 00:12:58.465 "transports": [ 00:12:58.465 { 00:12:58.465 "trtype": "TCP" 00:12:58.465 } 00:12:58.465 ] 00:12:58.465 }, 00:12:58.465 { 00:12:58.465 "admin_qpairs": 1, 00:12:58.465 "completed_nvme_io": 68, 00:12:58.465 "current_admin_qpairs": 0, 00:12:58.465 "current_io_qpairs": 0, 00:12:58.465 "io_qpairs": 18, 00:12:58.465 "name": "nvmf_tgt_poll_group_3", 00:12:58.465 "pending_bdev_io": 0, 00:12:58.465 "transports": [ 00:12:58.465 { 00:12:58.465 "trtype": "TCP" 00:12:58.465 } 00:12:58.465 ] 00:12:58.465 } 00:12:58.465 ], 00:12:58.465 "tick_rate": 2200000000 00:12:58.465 }' 00:12:58.465 06:30:50 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.465 06:30:50 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.465 06:30:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.465 06:30:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.465 06:30:51 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.465 06:30:51 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.465 06:30:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.465 06:30:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
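As a sanity check on the two assertions that close this test: summing the final stats across the four poll groups gives admin_qpairs 2+3+1+1 = 7 and io_qpairs 16+17+19+18 = 70, which are exactly the values the surrounding jsum checks feed into (( 7 > 0 )) and (( 70 > 0 )); the completed_nvme_io counters likewise total 115+167+70+68 = 420 I/Os over the connect cycles.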
00:12:58.465 06:30:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.465 06:30:51 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:58.465 06:30:51 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:58.465 06:30:51 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:58.465 06:30:51 -- target/rpc.sh@123 -- # nvmftestfini 00:12:58.465 06:30:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.465 06:30:51 -- nvmf/common.sh@116 -- # sync 00:12:58.465 06:30:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.465 06:30:51 -- nvmf/common.sh@119 -- # set +e 00:12:58.465 06:30:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.465 06:30:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.465 rmmod nvme_tcp 00:12:58.465 rmmod nvme_fabrics 00:12:58.725 rmmod nvme_keyring 00:12:58.725 06:30:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.725 06:30:51 -- nvmf/common.sh@123 -- # set -e 00:12:58.725 06:30:51 -- nvmf/common.sh@124 -- # return 0 00:12:58.725 06:30:51 -- nvmf/common.sh@477 -- # '[' -n 77652 ']' 00:12:58.725 06:30:51 -- nvmf/common.sh@478 -- # killprocess 77652 00:12:58.725 06:30:51 -- common/autotest_common.sh@926 -- # '[' -z 77652 ']' 00:12:58.725 06:30:51 -- common/autotest_common.sh@930 -- # kill -0 77652 00:12:58.725 06:30:51 -- common/autotest_common.sh@931 -- # uname 00:12:58.725 06:30:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:58.725 06:30:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77652 00:12:58.725 killing process with pid 77652 00:12:58.725 06:30:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:58.725 06:30:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:58.725 06:30:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77652' 00:12:58.725 06:30:51 -- common/autotest_common.sh@945 -- # kill 77652 00:12:58.725 06:30:51 -- common/autotest_common.sh@950 -- # wait 77652 00:12:58.984 06:30:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.984 06:30:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.984 06:30:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.984 06:30:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.984 06:30:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.984 06:30:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.984 06:30:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.984 06:30:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.984 06:30:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:58.984 00:12:58.984 real 0m19.176s 00:12:58.984 user 1m12.976s 00:12:58.984 sys 0m1.975s 00:12:58.984 06:30:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.984 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 ************************************ 00:12:58.984 END TEST nvmf_rpc 00:12:58.984 ************************************ 00:12:58.984 06:30:51 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.984 06:30:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:58.984 06:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.984 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:12:58.984 ************************************ 00:12:58.984 START TEST nvmf_invalid 00:12:58.984 ************************************ 00:12:58.984 
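The teardown above unloads the nvme_tcp/nvme_fabrics/nvme_keyring modules and then stops the target via killprocess on the recorded pid. From the traced checks, the helper confirms the pid is still alive and is not a sudo wrapper before signalling it; a sketch reconstructed from those lines (wait only works here because the target was started by this same shell, as it is in this harness):

killprocess() {
    local pid=$1                               # 77652 in this run
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 0                 # already gone
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap our own child
}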
06:30:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:58.984 * Looking for test storage... 00:12:58.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.984 06:30:51 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.984 06:30:51 -- nvmf/common.sh@7 -- # uname -s 00:12:58.984 06:30:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.984 06:30:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.984 06:30:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.984 06:30:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.984 06:30:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.984 06:30:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.984 06:30:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.984 06:30:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.984 06:30:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.984 06:30:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.984 06:30:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:58.984 06:30:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:12:58.984 06:30:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.984 06:30:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.984 06:30:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.984 06:30:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.984 06:30:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.984 06:30:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.984 06:30:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.984 06:30:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.984 06:30:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.984 06:30:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.984 06:30:51 -- paths/export.sh@5 -- # export PATH 00:12:58.984 06:30:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.984 06:30:51 -- nvmf/common.sh@46 -- # : 0 00:12:58.984 06:30:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.984 06:30:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.984 06:30:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.984 06:30:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.984 06:30:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.984 06:30:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.984 06:30:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.984 06:30:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.984 06:30:51 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:58.984 06:30:51 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:58.984 06:30:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:58.984 06:30:51 -- target/invalid.sh@14 -- # target=foobar 00:12:58.985 06:30:51 -- target/invalid.sh@16 -- # RANDOM=0 00:12:58.985 06:30:51 -- target/invalid.sh@34 -- # nvmftestinit 00:12:58.985 06:30:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.985 06:30:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.985 06:30:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.985 06:30:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.985 06:30:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.985 06:30:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.985 06:30:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.985 06:30:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.985 06:30:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:58.985 06:30:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:58.985 06:30:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:58.985 06:30:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:58.985 06:30:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:58.985 06:30:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:58.985 06:30:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.985 06:30:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.985 06:30:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
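Two details in the common.sh setup traced above are worth calling out: the host identity is minted fresh each run with nvme gen-hostnqn (which is why the same uuid:9f42e5a1-... string threads through every connect in this log), and the veth test bed reserves its ports and addresses up front: 4420/4421/4422 for the first, second, and third listeners, 10.0.0.1 for the initiator, and 10.0.0.2/10.0.0.3 for the two target interfaces. A sketch of the identity step (the hostid derivation shown here is an assumption; the trace only shows the resulting values):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: the uuid suffix doubles as --hostid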
00:12:58.985 06:30:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:58.985 06:30:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.985 06:30:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.985 06:30:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.985 06:30:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.985 06:30:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.985 06:30:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.985 06:30:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.985 06:30:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.985 06:30:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:58.985 06:30:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.244 Cannot find device "nvmf_tgt_br" 00:12:59.244 06:30:51 -- nvmf/common.sh@154 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.244 Cannot find device "nvmf_tgt_br2" 00:12:59.244 06:30:51 -- nvmf/common.sh@155 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.244 06:30:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.244 Cannot find device "nvmf_tgt_br" 00:12:59.244 06:30:51 -- nvmf/common.sh@157 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.244 Cannot find device "nvmf_tgt_br2" 00:12:59.244 06:30:51 -- nvmf/common.sh@158 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.244 06:30:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.244 06:30:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.244 06:30:51 -- nvmf/common.sh@161 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.244 06:30:51 -- nvmf/common.sh@162 -- # true 00:12:59.244 06:30:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.244 06:30:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.244 06:30:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.244 06:30:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.244 06:30:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.244 06:30:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.244 06:30:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.244 06:30:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.244 06:30:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.244 06:30:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:59.244 06:30:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:59.244 06:30:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:59.244 06:30:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
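The "Cannot find device" and "Cannot open network namespace" complaints above are expected: nvmf_veth_init first tears down any leftovers from a previous run, and on a clean host there is nothing to delete. Stripped of trace prefixes, the build-up is three veth pairs with the target ends pushed into a fresh namespace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

after which the interfaces are brought up and the *_br peers are enslaved to the nvmf_br bridge, as the lines that follow show.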
00:12:59.244 06:30:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.244 06:30:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.244 06:30:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.244 06:30:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:59.244 06:30:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:59.244 06:30:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.244 06:30:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.503 06:30:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.503 06:30:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.503 06:30:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.503 06:30:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:59.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:59.503 00:12:59.503 --- 10.0.0.2 ping statistics --- 00:12:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.503 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:59.503 06:30:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:59.503 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.503 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:59.503 00:12:59.503 --- 10.0.0.3 ping statistics --- 00:12:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.503 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:59.503 06:30:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:59.503 00:12:59.503 --- 10.0.0.1 ping statistics --- 00:12:59.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.503 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:59.503 06:30:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.503 06:30:51 -- nvmf/common.sh@421 -- # return 0 00:12:59.503 06:30:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.503 06:30:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.503 06:30:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.503 06:30:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.503 06:30:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.503 06:30:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.503 06:30:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.503 06:30:51 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:59.503 06:30:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:59.503 06:30:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:59.503 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:12:59.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
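With the links up, the harness enslaves the host-side veth peers to one bridge, opens the NVMe/TCP port on the initiator interface, and proves reachability in both directions with single pings, as the trace above shows. The same check, condensed:

    # Bridge the host-side veth peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in, and let traffic hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Reachability: host -> both target IPs, namespace -> initiator IP
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The INPUT rule is inserted at position 1 so the 4420 listener stays reachable even under a restrictive default policy; the sub-0.1 ms round trips above confirm the bridge forwards between namespaces.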
00:12:59.503 06:30:51 -- nvmf/common.sh@469 -- # nvmfpid=78164 00:12:59.503 06:30:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.503 06:30:51 -- nvmf/common.sh@470 -- # waitforlisten 78164 00:12:59.503 06:30:51 -- common/autotest_common.sh@819 -- # '[' -z 78164 ']' 00:12:59.503 06:30:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.503 06:30:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:59.503 06:30:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.503 06:30:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:59.503 06:30:51 -- common/autotest_common.sh@10 -- # set +x 00:12:59.503 [2024-10-04 06:30:52.056176] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:12:59.503 [2024-10-04 06:30:52.056269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.762 [2024-10-04 06:30:52.197419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.762 [2024-10-04 06:30:52.275097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.762 [2024-10-04 06:30:52.275519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.762 [2024-10-04 06:30:52.275635] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.762 [2024-10-04 06:30:52.275765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
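nvmfappstart launches nvmf_tgt inside the namespace with core mask 0xF (four reactors, matching the four cores available) and then blocks in waitforlisten until the JSON-RPC socket answers. A hedged sketch of that wait loop; the real helper in test/common/autotest_common.sh handles retries and timeouts differently:

    # Start the target in the namespace, then poll /var/tmp/spdk.sock.
    # Sketch only -- the real waitforlisten lives in autotest_common.sh.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1              # target died during startup
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break                                 # socket is accepting RPCs
        fi
        sleep 0.1
    done

Once the loop breaks, the pid (78164 in this run) is registered so nvmftestfini can kill it on exit.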
00:12:59.762 [2024-10-04 06:30:52.275926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.762 [2024-10-04 06:30:52.276067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.762 [2024-10-04 06:30:52.276665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.762 [2024-10-04 06:30:52.276701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.705 06:30:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:00.705 06:30:53 -- common/autotest_common.sh@852 -- # return 0 00:13:00.705 06:30:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.705 06:30:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:00.705 06:30:53 -- common/autotest_common.sh@10 -- # set +x 00:13:00.705 06:30:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.705 06:30:53 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:00.705 06:30:53 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5972 00:13:00.964 [2024-10-04 06:30:53.467170] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:00.964 06:30:53 -- target/invalid.sh@40 -- # out='2024/10/04 06:30:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5972 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:00.964 request: 00:13:00.964 { 00:13:00.964 "method": "nvmf_create_subsystem", 00:13:00.964 "params": { 00:13:00.964 "nqn": "nqn.2016-06.io.spdk:cnode5972", 00:13:00.964 "tgt_name": "foobar" 00:13:00.964 } 00:13:00.964 } 00:13:00.964 Got JSON-RPC error response 00:13:00.964 GoRPCClient: error on JSON-RPC call' 00:13:00.964 06:30:53 -- target/invalid.sh@41 -- # [[ 2024/10/04 06:30:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode5972 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:00.964 request: 00:13:00.964 { 00:13:00.964 "method": "nvmf_create_subsystem", 00:13:00.964 "params": { 00:13:00.964 "nqn": "nqn.2016-06.io.spdk:cnode5972", 00:13:00.964 "tgt_name": "foobar" 00:13:00.964 } 00:13:00.964 } 00:13:00.964 Got JSON-RPC error response 00:13:00.964 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:00.964 06:30:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:00.964 06:30:53 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2675 00:13:01.223 [2024-10-04 06:30:53.755656] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2675: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:01.223 06:30:53 -- target/invalid.sh@45 -- # out='2024/10/04 06:30:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2675 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:01.223 request: 00:13:01.223 { 00:13:01.223 "method": "nvmf_create_subsystem", 00:13:01.223 "params": { 00:13:01.223 "nqn": "nqn.2016-06.io.spdk:cnode2675", 00:13:01.223 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:13:01.223 } 00:13:01.223 } 00:13:01.223 Got JSON-RPC error response 00:13:01.223 GoRPCClient: error on JSON-RPC call' 00:13:01.223 06:30:53 -- target/invalid.sh@46 -- # [[ 2024/10/04 06:30:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2675 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:01.223 request: 00:13:01.223 { 00:13:01.223 "method": "nvmf_create_subsystem", 00:13:01.223 "params": { 00:13:01.223 "nqn": "nqn.2016-06.io.spdk:cnode2675", 00:13:01.223 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:01.223 } 00:13:01.223 } 00:13:01.223 Got JSON-RPC error response 00:13:01.223 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:01.223 06:30:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:01.223 06:30:53 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17083 00:13:01.482 [2024-10-04 06:30:54.056010] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17083: invalid model number 'SPDK_Controller' 00:13:01.482 06:30:54 -- target/invalid.sh@50 -- # out='2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17083], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:01.482 request: 00:13:01.482 { 00:13:01.482 "method": "nvmf_create_subsystem", 00:13:01.482 "params": { 00:13:01.482 "nqn": "nqn.2016-06.io.spdk:cnode17083", 00:13:01.482 "model_number": "SPDK_Controller\u001f" 00:13:01.482 } 00:13:01.482 } 00:13:01.482 Got JSON-RPC error response 00:13:01.482 GoRPCClient: error on JSON-RPC call' 00:13:01.482 06:30:54 -- target/invalid.sh@51 -- # [[ 2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17083], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:01.482 request: 00:13:01.482 { 00:13:01.482 "method": "nvmf_create_subsystem", 00:13:01.482 "params": { 00:13:01.482 "nqn": "nqn.2016-06.io.spdk:cnode17083", 00:13:01.482 "model_number": "SPDK_Controller\u001f" 00:13:01.482 } 00:13:01.482 } 00:13:01.482 Got JSON-RPC error response 00:13:01.482 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.482 06:30:54 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:01.482 06:30:54 -- target/invalid.sh@19 -- # local length=21 ll 00:13:01.482 06:30:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:01.482 06:30:54 -- target/invalid.sh@21 -- # local chars 00:13:01.482 06:30:54 -- target/invalid.sh@22 -- # local string 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # printf %x 104 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # string+=h 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # printf %x 83 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # string+=S 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # printf %x 55 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # string+=7 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # printf %x 78 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # string+=N 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # printf %x 120 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:01.482 06:30:54 -- target/invalid.sh@25 -- # string+=x 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.482 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 65 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=A 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 91 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+='[' 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 95 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=_ 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 104 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=h 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 47 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=/ 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 122 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=z 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 57 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=9 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 61 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+== 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 97 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # string+=a 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.483 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # printf %x 110 00:13:01.483 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=n 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 76 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=L 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 59 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=';' 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 50 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=2 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 113 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=q 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 90 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=Z 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # printf %x 47 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:01.740 06:30:54 -- target/invalid.sh@25 -- # string+=/ 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:01.740 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:01.740 06:30:54 -- target/invalid.sh@28 -- # [[ h == \- ]] 00:13:01.740 06:30:54 -- target/invalid.sh@31 -- # echo 'hS7NxA[_h/z9=anL;2qZ/' 00:13:01.740 06:30:54 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hS7NxA[_h/z9=anL;2qZ/' nqn.2016-06.io.spdk:cnode14125 
00:13:02.000 [2024-10-04 06:30:54.452696] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14125: invalid serial number 'hS7NxA[_h/z9=anL;2qZ/' 00:13:02.000 06:30:54 -- target/invalid.sh@54 -- # out='2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14125 serial_number:hS7NxA[_h/z9=anL;2qZ/], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN hS7NxA[_h/z9=anL;2qZ/ 00:13:02.000 request: 00:13:02.000 { 00:13:02.000 "method": "nvmf_create_subsystem", 00:13:02.000 "params": { 00:13:02.000 "nqn": "nqn.2016-06.io.spdk:cnode14125", 00:13:02.000 "serial_number": "hS7NxA[_h/z9=anL;2qZ/" 00:13:02.000 } 00:13:02.000 } 00:13:02.000 Got JSON-RPC error response 00:13:02.000 GoRPCClient: error on JSON-RPC call' 00:13:02.000 06:30:54 -- target/invalid.sh@55 -- # [[ 2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14125 serial_number:hS7NxA[_h/z9=anL;2qZ/], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN hS7NxA[_h/z9=anL;2qZ/ 00:13:02.000 request: 00:13:02.000 { 00:13:02.000 "method": "nvmf_create_subsystem", 00:13:02.000 "params": { 00:13:02.000 "nqn": "nqn.2016-06.io.spdk:cnode14125", 00:13:02.000 "serial_number": "hS7NxA[_h/z9=anL;2qZ/" 00:13:02.000 } 00:13:02.000 } 00:13:02.000 Got JSON-RPC error response 00:13:02.000 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.000 06:30:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:02.000 06:30:54 -- target/invalid.sh@19 -- # local length=41 ll 00:13:02.000 06:30:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.000 06:30:54 -- target/invalid.sh@21 -- # local chars 00:13:02.000 06:30:54 -- target/invalid.sh@22 -- # local string 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 47 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=/ 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 62 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='>' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 46 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=. 
00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 121 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=y 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 123 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='{' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 95 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=_ 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 120 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=x 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 80 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=P 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 120 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=x 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 68 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=D 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 125 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='}' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 38 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='&' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 41 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=')' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 53 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=5 
00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 115 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=s 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 105 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=i 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 79 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=O 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 59 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=';' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 119 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=w 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 36 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='$' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 52 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=4 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 64 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=@ 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 119 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=w 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 62 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+='>' 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 116 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # string+=t 
00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.000 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # printf %x 89 00:13:02.000 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=Y 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 119 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=w 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 122 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=z 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 52 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=4 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 75 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=K 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 116 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=t 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 69 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=E 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 75 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=K 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 49 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=1 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 54 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=6 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 121 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=y 
00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 71 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=G 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # printf %x 71 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:02.001 06:30:54 -- target/invalid.sh@25 -- # string+=G 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.001 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.259 06:30:54 -- target/invalid.sh@25 -- # printf %x 88 00:13:02.259 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:02.259 06:30:54 -- target/invalid.sh@25 -- # string+=X 00:13:02.259 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.259 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # printf %x 43 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # string+=+ 00:13:02.260 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.260 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # printf %x 50 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:02.260 06:30:54 -- target/invalid.sh@25 -- # string+=2 00:13:02.260 06:30:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.260 06:30:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.260 06:30:54 -- target/invalid.sh@28 -- # [[ / == \- ]] 00:13:02.260 06:30:54 -- target/invalid.sh@31 -- # echo '/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2' 00:13:02.260 06:30:54 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2' nqn.2016-06.io.spdk:cnode20226 00:13:02.518 [2024-10-04 06:30:54.977497] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20226: invalid model number '/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2' 00:13:02.518 06:30:55 -- target/invalid.sh@58 -- # out='2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2 nqn:nqn.2016-06.io.spdk:cnode20226], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN />.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2 00:13:02.518 request: 00:13:02.518 { 00:13:02.518 "method": "nvmf_create_subsystem", 00:13:02.518 "params": { 00:13:02.518 "nqn": "nqn.2016-06.io.spdk:cnode20226", 00:13:02.518 "model_number": "/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2" 00:13:02.518 } 00:13:02.518 } 00:13:02.518 Got JSON-RPC error response 00:13:02.518 GoRPCClient: error on JSON-RPC call' 00:13:02.519 06:30:55 -- target/invalid.sh@59 -- # [[ 2024/10/04 06:30:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2 nqn:nqn.2016-06.io.spdk:cnode20226], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN />.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2 00:13:02.519 request: 00:13:02.519 { 00:13:02.519 "method": "nvmf_create_subsystem", 00:13:02.519 "params": { 00:13:02.519 "nqn": "nqn.2016-06.io.spdk:cnode20226", 
00:13:02.519 "model_number": "/>.y{_xPxD}&)5siO;w$4@w>tYwz4KtEK16yGGX+2" 00:13:02.519 } 00:13:02.519 } 00:13:02.519 Got JSON-RPC error response 00:13:02.519 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.519 06:30:55 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:02.778 [2024-10-04 06:30:55.257938] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.778 06:30:55 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:03.037 06:30:55 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:03.037 06:30:55 -- target/invalid.sh@67 -- # echo '' 00:13:03.037 06:30:55 -- target/invalid.sh@67 -- # head -n 1 00:13:03.037 06:30:55 -- target/invalid.sh@67 -- # IP= 00:13:03.037 06:30:55 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:03.296 [2024-10-04 06:30:55.869652] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:03.296 06:30:55 -- target/invalid.sh@69 -- # out='2024/10/04 06:30:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:03.296 request: 00:13:03.296 { 00:13:03.296 "method": "nvmf_subsystem_remove_listener", 00:13:03.296 "params": { 00:13:03.296 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.296 "listen_address": { 00:13:03.296 "trtype": "tcp", 00:13:03.296 "traddr": "", 00:13:03.296 "trsvcid": "4421" 00:13:03.296 } 00:13:03.296 } 00:13:03.296 } 00:13:03.296 Got JSON-RPC error response 00:13:03.296 GoRPCClient: error on JSON-RPC call' 00:13:03.296 06:30:55 -- target/invalid.sh@70 -- # [[ 2024/10/04 06:30:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:13:03.296 request: 00:13:03.296 { 00:13:03.296 "method": "nvmf_subsystem_remove_listener", 00:13:03.296 "params": { 00:13:03.296 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.296 "listen_address": { 00:13:03.296 "trtype": "tcp", 00:13:03.296 "traddr": "", 00:13:03.296 "trsvcid": "4421" 00:13:03.296 } 00:13:03.296 } 00:13:03.296 } 00:13:03.296 Got JSON-RPC error response 00:13:03.296 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:03.296 06:30:55 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5055 -i 0 00:13:03.555 [2024-10-04 06:30:56.093904] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5055: invalid cntlid range [0-65519] 00:13:03.555 06:30:56 -- target/invalid.sh@73 -- # out='2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5055], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:03.555 request: 00:13:03.555 { 00:13:03.555 "method": "nvmf_create_subsystem", 00:13:03.555 "params": { 00:13:03.555 "nqn": "nqn.2016-06.io.spdk:cnode5055", 00:13:03.555 "min_cntlid": 0 
00:13:03.555 } 00:13:03.555 } 00:13:03.555 Got JSON-RPC error response 00:13:03.556 GoRPCClient: error on JSON-RPC call' 00:13:03.556 06:30:56 -- target/invalid.sh@74 -- # [[ 2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5055], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:13:03.556 request: 00:13:03.556 { 00:13:03.556 "method": "nvmf_create_subsystem", 00:13:03.556 "params": { 00:13:03.556 "nqn": "nqn.2016-06.io.spdk:cnode5055", 00:13:03.556 "min_cntlid": 0 00:13:03.556 } 00:13:03.556 } 00:13:03.556 Got JSON-RPC error response 00:13:03.556 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.556 06:30:56 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5059 -i 65520 00:13:03.815 [2024-10-04 06:30:56.330342] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5059: invalid cntlid range [65520-65519] 00:13:03.815 06:30:56 -- target/invalid.sh@75 -- # out='2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5059], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:03.815 request: 00:13:03.815 { 00:13:03.815 "method": "nvmf_create_subsystem", 00:13:03.815 "params": { 00:13:03.815 "nqn": "nqn.2016-06.io.spdk:cnode5059", 00:13:03.815 "min_cntlid": 65520 00:13:03.815 } 00:13:03.815 } 00:13:03.815 Got JSON-RPC error response 00:13:03.815 GoRPCClient: error on JSON-RPC call' 00:13:03.815 06:30:56 -- target/invalid.sh@76 -- # [[ 2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode5059], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:13:03.815 request: 00:13:03.815 { 00:13:03.815 "method": "nvmf_create_subsystem", 00:13:03.815 "params": { 00:13:03.815 "nqn": "nqn.2016-06.io.spdk:cnode5059", 00:13:03.815 "min_cntlid": 65520 00:13:03.815 } 00:13:03.815 } 00:13:03.815 Got JSON-RPC error response 00:13:03.815 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.815 06:30:56 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30342 -I 0 00:13:04.074 [2024-10-04 06:30:56.610808] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30342: invalid cntlid range [1-0] 00:13:04.074 06:30:56 -- target/invalid.sh@77 -- # out='2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30342], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:04.074 request: 00:13:04.074 { 00:13:04.074 "method": "nvmf_create_subsystem", 00:13:04.074 "params": { 00:13:04.074 "nqn": "nqn.2016-06.io.spdk:cnode30342", 00:13:04.074 "max_cntlid": 0 00:13:04.074 } 00:13:04.074 } 00:13:04.074 Got JSON-RPC error response 00:13:04.074 GoRPCClient: error on JSON-RPC call' 00:13:04.074 06:30:56 -- target/invalid.sh@78 -- # [[ 2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30342], err: error received for 
nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:13:04.074 request: 00:13:04.074 { 00:13:04.074 "method": "nvmf_create_subsystem", 00:13:04.074 "params": { 00:13:04.074 "nqn": "nqn.2016-06.io.spdk:cnode30342", 00:13:04.074 "max_cntlid": 0 00:13:04.074 } 00:13:04.074 } 00:13:04.074 Got JSON-RPC error response 00:13:04.074 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.074 06:30:56 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28599 -I 65520 00:13:04.332 [2024-10-04 06:30:56.855251] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28599: invalid cntlid range [1-65520] 00:13:04.332 06:30:56 -- target/invalid.sh@79 -- # out='2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28599], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:04.332 request: 00:13:04.332 { 00:13:04.332 "method": "nvmf_create_subsystem", 00:13:04.332 "params": { 00:13:04.332 "nqn": "nqn.2016-06.io.spdk:cnode28599", 00:13:04.332 "max_cntlid": 65520 00:13:04.332 } 00:13:04.332 } 00:13:04.332 Got JSON-RPC error response 00:13:04.332 GoRPCClient: error on JSON-RPC call' 00:13:04.332 06:30:56 -- target/invalid.sh@80 -- # [[ 2024/10/04 06:30:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28599], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:13:04.332 request: 00:13:04.332 { 00:13:04.332 "method": "nvmf_create_subsystem", 00:13:04.332 "params": { 00:13:04.332 "nqn": "nqn.2016-06.io.spdk:cnode28599", 00:13:04.332 "max_cntlid": 65520 00:13:04.332 } 00:13:04.332 } 00:13:04.332 Got JSON-RPC error response 00:13:04.332 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.332 06:30:56 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31897 -i 6 -I 5 00:13:04.591 [2024-10-04 06:30:57.087708] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31897: invalid cntlid range [6-5] 00:13:04.591 06:30:57 -- target/invalid.sh@83 -- # out='2024/10/04 06:30:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31897], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:04.591 request: 00:13:04.591 { 00:13:04.591 "method": "nvmf_create_subsystem", 00:13:04.591 "params": { 00:13:04.591 "nqn": "nqn.2016-06.io.spdk:cnode31897", 00:13:04.591 "min_cntlid": 6, 00:13:04.591 "max_cntlid": 5 00:13:04.591 } 00:13:04.591 } 00:13:04.591 Got JSON-RPC error response 00:13:04.591 GoRPCClient: error on JSON-RPC call' 00:13:04.591 06:30:57 -- target/invalid.sh@84 -- # [[ 2024/10/04 06:30:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode31897], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:13:04.591 request: 00:13:04.591 { 00:13:04.591 "method": "nvmf_create_subsystem", 00:13:04.591 "params": { 00:13:04.591 "nqn": "nqn.2016-06.io.spdk:cnode31897", 00:13:04.591 "min_cntlid": 6, 00:13:04.591 "max_cntlid": 5 
00:13:04.591 } 00:13:04.591 } 00:13:04.591 Got JSON-RPC error response 00:13:04.591 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.591 06:30:57 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:04.591 06:30:57 -- target/invalid.sh@87 -- # out='request: 00:13:04.591 { 00:13:04.591 "name": "foobar", 00:13:04.591 "method": "nvmf_delete_target", 00:13:04.591 "req_id": 1 00:13:04.591 } 00:13:04.592 Got JSON-RPC error response 00:13:04.592 response: 00:13:04.592 { 00:13:04.592 "code": -32602, 00:13:04.592 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:04.592 }' 00:13:04.592 06:30:57 -- target/invalid.sh@88 -- # [[ request: 00:13:04.592 { 00:13:04.592 "name": "foobar", 00:13:04.592 "method": "nvmf_delete_target", 00:13:04.592 "req_id": 1 00:13:04.592 } 00:13:04.592 Got JSON-RPC error response 00:13:04.592 response: 00:13:04.592 { 00:13:04.592 "code": -32602, 00:13:04.592 "message": "The specified target doesn't exist, cannot delete it." 00:13:04.592 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:04.592 06:30:57 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:04.592 06:30:57 -- target/invalid.sh@91 -- # nvmftestfini 00:13:04.592 06:30:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:04.592 06:30:57 -- nvmf/common.sh@116 -- # sync 00:13:04.850 06:30:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:04.850 06:30:57 -- nvmf/common.sh@119 -- # set +e 00:13:04.850 06:30:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:04.850 06:30:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:04.850 rmmod nvme_tcp 00:13:04.850 rmmod nvme_fabrics 00:13:04.850 rmmod nvme_keyring 00:13:04.850 06:30:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:04.850 06:30:57 -- nvmf/common.sh@123 -- # set -e 00:13:04.850 06:30:57 -- nvmf/common.sh@124 -- # return 0 00:13:04.850 06:30:57 -- nvmf/common.sh@477 -- # '[' -n 78164 ']' 00:13:04.850 06:30:57 -- nvmf/common.sh@478 -- # killprocess 78164 00:13:04.850 06:30:57 -- common/autotest_common.sh@926 -- # '[' -z 78164 ']' 00:13:04.850 06:30:57 -- common/autotest_common.sh@930 -- # kill -0 78164 00:13:04.850 06:30:57 -- common/autotest_common.sh@931 -- # uname 00:13:04.850 06:30:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.850 06:30:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78164 00:13:04.850 killing process with pid 78164 00:13:04.850 06:30:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:04.850 06:30:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:04.850 06:30:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78164' 00:13:04.850 06:30:57 -- common/autotest_common.sh@945 -- # kill 78164 00:13:04.850 06:30:57 -- common/autotest_common.sh@950 -- # wait 78164 00:13:05.109 06:30:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:05.109 06:30:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:05.109 06:30:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:05.109 06:30:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.109 06:30:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:05.109 06:30:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.109 06:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:05.109 06:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.109 06:30:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:05.109 ************************************ 00:13:05.109 END TEST nvmf_invalid 00:13:05.109 ************************************ 00:13:05.109 00:13:05.109 real 0m6.071s 00:13:05.109 user 0m24.623s 00:13:05.109 sys 0m1.372s 00:13:05.109 06:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.109 06:30:57 -- common/autotest_common.sh@10 -- # set +x 00:13:05.109 06:30:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:05.110 06:30:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:05.110 06:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.110 06:30:57 -- common/autotest_common.sh@10 -- # set +x 00:13:05.110 ************************************ 00:13:05.110 START TEST nvmf_abort 00:13:05.110 ************************************ 00:13:05.110 06:30:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:05.110 * Looking for test storage... 00:13:05.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:05.110 06:30:57 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:05.110 06:30:57 -- nvmf/common.sh@7 -- # uname -s 00:13:05.110 06:30:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.110 06:30:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.110 06:30:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.110 06:30:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.110 06:30:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.110 06:30:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.110 06:30:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.110 06:30:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.110 06:30:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.110 06:30:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:05.110 06:30:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:05.110 06:30:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.110 06:30:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.110 06:30:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:05.110 06:30:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:05.110 06:30:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.110 06:30:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.110 06:30:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.110 06:30:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:05.110 06:30:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 06:30:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 06:30:57 -- paths/export.sh@5 -- # export PATH 00:13:05.110 06:30:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 06:30:57 -- nvmf/common.sh@46 -- # : 0 00:13:05.110 06:30:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:05.110 06:30:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:05.110 06:30:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:05.110 06:30:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.110 06:30:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.110 06:30:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:05.110 06:30:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:05.110 06:30:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:05.110 06:30:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.110 06:30:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:05.110 06:30:57 -- target/abort.sh@14 -- # nvmftestinit 00:13:05.110 06:30:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:05.110 06:30:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.110 06:30:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:05.110 06:30:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:05.110 06:30:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:05.110 06:30:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.110 06:30:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.110 06:30:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.110 06:30:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@419 -- 
# [[ tcp == tcp ]] 00:13:05.110 06:30:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:05.110 06:30:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.110 06:30:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.110 06:30:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:05.110 06:30:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:05.110 06:30:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:05.110 06:30:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:05.110 06:30:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:05.110 06:30:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.110 06:30:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:05.110 06:30:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:05.110 06:30:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:05.110 06:30:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:05.110 06:30:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:05.369 06:30:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:05.369 Cannot find device "nvmf_tgt_br" 00:13:05.369 06:30:57 -- nvmf/common.sh@154 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:05.369 Cannot find device "nvmf_tgt_br2" 00:13:05.369 06:30:57 -- nvmf/common.sh@155 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:05.369 06:30:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:05.369 Cannot find device "nvmf_tgt_br" 00:13:05.369 06:30:57 -- nvmf/common.sh@157 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:05.369 Cannot find device "nvmf_tgt_br2" 00:13:05.369 06:30:57 -- nvmf/common.sh@158 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:05.369 06:30:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:05.369 06:30:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.369 06:30:57 -- nvmf/common.sh@161 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.369 06:30:57 -- nvmf/common.sh@162 -- # true 00:13:05.369 06:30:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:05.369 06:30:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:05.369 06:30:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:05.369 06:30:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:05.369 06:30:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:05.369 06:30:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:05.369 06:30:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:05.369 06:30:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:05.369 06:30:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:13:05.369 06:30:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:05.369 06:30:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:05.369 06:30:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:05.369 06:30:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:05.369 06:30:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:05.369 06:30:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:05.369 06:30:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:05.369 06:30:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:05.369 06:30:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:05.369 06:30:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.369 06:30:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.369 06:30:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:05.627 06:30:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:05.627 06:30:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:05.627 06:30:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:05.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:13:05.627 00:13:05.627 --- 10.0.0.2 ping statistics --- 00:13:05.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.627 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:05.627 06:30:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:05.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:05.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:05.627 00:13:05.627 --- 10.0.0.3 ping statistics --- 00:13:05.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.627 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:05.627 06:30:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:05.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:05.627 00:13:05.627 --- 10.0.0.1 ping statistics --- 00:13:05.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.627 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:05.627 06:30:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.627 06:30:58 -- nvmf/common.sh@421 -- # return 0 00:13:05.627 06:30:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:05.627 06:30:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.627 06:30:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:05.627 06:30:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:05.627 06:30:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.627 06:30:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:05.627 06:30:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:05.627 06:30:58 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:05.627 06:30:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:05.627 06:30:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:05.627 06:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:05.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
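Note on the nvmf_veth_init sequence traced above: it builds a small fabric in which the initiator-side interface (10.0.0.1) and the target's namespaced interfaces (10.0.0.2, 10.0.0.3) hang off one bridge, with iptables rules admitting the NVMe/TCP port and bridge-local forwarding. A condensed standalone sketch of the same topology, using the names from the trace and eliding the second target interface (run as root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br                       # bridge the two peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # same sanity check as the trace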
00:13:05.627 06:30:58 -- nvmf/common.sh@469 -- # nvmfpid=78681 00:13:05.627 06:30:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.627 06:30:58 -- nvmf/common.sh@470 -- # waitforlisten 78681 00:13:05.627 06:30:58 -- common/autotest_common.sh@819 -- # '[' -z 78681 ']' 00:13:05.627 06:30:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.627 06:30:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:05.627 06:30:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.627 06:30:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:05.627 06:30:58 -- common/autotest_common.sh@10 -- # set +x 00:13:05.627 [2024-10-04 06:30:58.160416] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:13:05.627 [2024-10-04 06:30:58.160667] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.627 [2024-10-04 06:30:58.294595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.885 [2024-10-04 06:30:58.367216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.885 [2024-10-04 06:30:58.367761] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.885 [2024-10-04 06:30:58.367820] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.885 [2024-10-04 06:30:58.368080] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
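The target itself is launched inside that namespace with -m 0xE (reactors on cores 1-3, matching the three reactor notices below) and shm id 0, which is why the trace hint points at /dev/shm/nvmf_trace.0. A minimal sketch of the launch-and-wait pattern; the polling loop approximates what waitforlisten in common/autotest_common.sh does rather than copying it:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Do not issue RPCs until the app exposes its UNIX-domain RPC socket.
while ! [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1    # bail out if the target died during startup
    sleep 0.1
done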
00:13:05.885 [2024-10-04 06:30:58.368261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.885 [2024-10-04 06:30:58.368532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.885 [2024-10-04 06:30:58.368540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.820 06:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:06.820 06:30:59 -- common/autotest_common.sh@852 -- # return 0 00:13:06.820 06:30:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.820 06:30:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 06:30:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.820 06:30:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 [2024-10-04 06:30:59.186000] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 Malloc0 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 Delay0 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 [2024-10-04 06:30:59.257897] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.820 06:30:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.820 06:30:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.820 06:30:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.820 06:30:59 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:06.820 [2024-10-04 06:30:59.431860] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:09.362 Initializing NVMe Controllers 00:13:09.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:09.362 Controller IO queue size 128, less than required 00:13:09.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:09.362 Initialization complete. Launching workers. 00:13:09.362 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33799 00:13:09.362 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33860, failed to submit 62 00:13:09.362 success 33799, unsuccessful 61, failed 0 00:13:09.363 06:31:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:09.363 06:31:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.363 06:31:01 -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 06:31:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.363 06:31:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:09.363 06:31:01 -- target/abort.sh@38 -- # nvmftestfini 00:13:09.363 06:31:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:09.363 06:31:01 -- nvmf/common.sh@116 -- # sync 00:13:09.363 06:31:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:09.363 06:31:01 -- nvmf/common.sh@119 -- # set +e 00:13:09.363 06:31:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:09.363 06:31:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:09.363 rmmod nvme_tcp 00:13:09.363 rmmod nvme_fabrics 00:13:09.363 rmmod nvme_keyring 00:13:09.363 06:31:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:09.363 06:31:01 -- nvmf/common.sh@123 -- # set -e 00:13:09.363 06:31:01 -- nvmf/common.sh@124 -- # return 0 00:13:09.363 06:31:01 -- nvmf/common.sh@477 -- # '[' -n 78681 ']' 00:13:09.363 06:31:01 -- nvmf/common.sh@478 -- # killprocess 78681 00:13:09.363 06:31:01 -- common/autotest_common.sh@926 -- # '[' -z 78681 ']' 00:13:09.363 06:31:01 -- common/autotest_common.sh@930 -- # kill -0 78681 00:13:09.363 06:31:01 -- common/autotest_common.sh@931 -- # uname 00:13:09.363 06:31:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:09.363 06:31:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78681 00:13:09.363 killing process with pid 78681 00:13:09.363 06:31:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:09.363 06:31:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:09.363 06:31:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78681' 00:13:09.363 06:31:01 -- common/autotest_common.sh@945 -- # kill 78681 00:13:09.363 06:31:01 -- common/autotest_common.sh@950 -- # wait 78681 00:13:09.363 06:31:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:09.363 06:31:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:09.363 06:31:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:09.363 06:31:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.363 06:31:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:09.363 06:31:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.363
06:31:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.363 06:31:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.363 06:31:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:09.363 00:13:09.363 real 0m4.283s 00:13:09.363 user 0m12.442s 00:13:09.363 sys 0m1.065s 00:13:09.363 06:31:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.363 ************************************ 00:13:09.363 END TEST nvmf_abort 00:13:09.363 ************************************ 00:13:09.363 06:31:01 -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 06:31:01 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.363 06:31:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.363 06:31:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.363 06:31:01 -- common/autotest_common.sh@10 -- # set +x 00:13:09.363 ************************************ 00:13:09.363 START TEST nvmf_ns_hotplug_stress 00:13:09.363 ************************************ 00:13:09.363 06:31:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.622 * Looking for test storage... 00:13:09.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:09.622 06:31:02 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:09.622 06:31:02 -- nvmf/common.sh@7 -- # uname -s 00:13:09.622 06:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.622 06:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.622 06:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.622 06:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.622 06:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.622 06:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.622 06:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.622 06:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.622 06:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.622 06:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:09.622 06:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:09.622 06:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.622 06:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.622 06:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:09.622 06:31:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:09.622 06:31:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.622 06:31:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.622 06:31:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.622 06:31:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.622 06:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.622 06:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.622 06:31:02 -- paths/export.sh@5 -- # export PATH 00:13:09.622 06:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.622 06:31:02 -- nvmf/common.sh@46 -- # : 0 00:13:09.622 06:31:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.622 06:31:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.622 06:31:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.622 06:31:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.622 06:31:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.622 06:31:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.622 06:31:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.622 06:31:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.622 06:31:02 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:09.622 06:31:02 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:09.622 06:31:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.622 06:31:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.622 06:31:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.622 06:31:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.622 06:31:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.622 06:31:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
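Aside on the snowballing PATH in the export.sh lines above: each test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which unconditionally prepends the go, golangci and protoc directories, so every run adds another copy of the same triple. A hypothetical guard (not in the actual script) that would keep PATH bounded:

# prepend_path is illustrative only; paths/export.sh itself prepends unconditionally
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present, skip
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/go/1.21.1/bin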
00:13:09.622 06:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.622 06:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.622 06:31:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:09.622 06:31:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:09.622 06:31:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.622 06:31:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.622 06:31:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:09.622 06:31:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:09.622 06:31:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:09.622 06:31:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:09.622 06:31:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:09.622 06:31:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.622 06:31:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:09.622 06:31:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:09.622 06:31:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:09.622 06:31:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:09.622 06:31:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:09.622 06:31:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:09.622 Cannot find device "nvmf_tgt_br" 00:13:09.622 06:31:02 -- nvmf/common.sh@154 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.622 Cannot find device "nvmf_tgt_br2" 00:13:09.622 06:31:02 -- nvmf/common.sh@155 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:09.622 06:31:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:09.622 Cannot find device "nvmf_tgt_br" 00:13:09.622 06:31:02 -- nvmf/common.sh@157 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:09.622 Cannot find device "nvmf_tgt_br2" 00:13:09.622 06:31:02 -- nvmf/common.sh@158 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:09.622 06:31:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:09.622 06:31:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.622 06:31:02 -- nvmf/common.sh@161 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.622 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.622 06:31:02 -- nvmf/common.sh@162 -- # true 00:13:09.622 06:31:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.622 06:31:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.622 06:31:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.622 06:31:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.622 06:31:02 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.622 06:31:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.881 06:31:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.881 06:31:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:09.881 06:31:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:09.881 06:31:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:09.881 06:31:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:09.881 06:31:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:09.881 06:31:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:09.881 06:31:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:09.881 06:31:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:09.881 06:31:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.881 06:31:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:09.881 06:31:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:09.881 06:31:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.881 06:31:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.881 06:31:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.881 06:31:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.881 06:31:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.881 06:31:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:09.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:09.881 00:13:09.881 --- 10.0.0.2 ping statistics --- 00:13:09.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.881 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:09.881 06:31:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:09.881 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.881 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:13:09.881 00:13:09.881 --- 10.0.0.3 ping statistics --- 00:13:09.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.881 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:09.881 06:31:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:09.881 00:13:09.881 --- 10.0.0.1 ping statistics --- 00:13:09.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.881 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:09.881 06:31:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.881 06:31:02 -- nvmf/common.sh@421 -- # return 0 00:13:09.881 06:31:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.881 06:31:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.881 06:31:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.881 06:31:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.881 06:31:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.881 06:31:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.881 06:31:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:09.881 06:31:02 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:09.881 06:31:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.881 06:31:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.881 06:31:02 -- common/autotest_common.sh@10 -- # set +x 00:13:09.881 06:31:02 -- nvmf/common.sh@469 -- # nvmfpid=78942 00:13:09.881 06:31:02 -- nvmf/common.sh@470 -- # waitforlisten 78942 00:13:09.881 06:31:02 -- common/autotest_common.sh@819 -- # '[' -z 78942 ']' 00:13:09.881 06:31:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.881 06:31:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.881 06:31:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.881 06:31:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:09.881 06:31:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.881 06:31:02 -- common/autotest_common.sh@10 -- # set +x 00:13:09.881 [2024-10-04 06:31:02.517211] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:13:09.881 [2024-10-04 06:31:02.517465] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.146 [2024-10-04 06:31:02.651290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.147 [2024-10-04 06:31:02.730044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.147 [2024-10-04 06:31:02.730577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.147 [2024-10-04 06:31:02.730710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.147 [2024-10-04 06:31:02.730870] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
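From here the hotplug test provisions the target purely over JSON-RPC; stripped of the xtrace noise, the @27-@36 lines below amount to this sequence (the address 10.0.0.2:4420 and all bdev sizes are taken straight from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                    # 32 MB, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512                         # the bdev the stress loop resizes
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1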
00:13:10.147 [2024-10-04 06:31:02.731177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.147 [2024-10-04 06:31:02.731259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.147 [2024-10-04 06:31:02.731266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.080 06:31:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:11.080 06:31:03 -- common/autotest_common.sh@852 -- # return 0 00:13:11.080 06:31:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:11.080 06:31:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:11.080 06:31:03 -- common/autotest_common.sh@10 -- # set +x 00:13:11.080 06:31:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.080 06:31:03 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:11.080 06:31:03 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:11.080 [2024-10-04 06:31:03.714043] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.080 06:31:03 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.646 06:31:04 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.646 [2024-10-04 06:31:04.231924] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.646 06:31:04 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.904 06:31:04 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:12.162 Malloc0 00:13:12.419 06:31:04 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:12.419 Delay0 00:13:12.419 06:31:05 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.985 06:31:05 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:12.985 NULL1 00:13:12.985 06:31:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:13.259 06:31:05 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:13.259 06:31:05 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79074 00:13:13.259 06:31:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:13.259 06:31:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.645 Read completed with error (sct=0, sc=11) 00:13:14.645 06:31:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.645 06:31:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:14.645 06:31:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:14.903 true 00:13:14.903 06:31:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:14.903 06:31:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.838 06:31:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.097 06:31:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:16.097 06:31:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:16.097 true 00:13:16.097 06:31:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:16.097 06:31:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.356 06:31:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.615 06:31:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:16.615 06:31:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:16.873 true 00:13:16.873 06:31:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:16.873 06:31:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.807 06:31:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.066 06:31:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:18.066 06:31:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:18.325 true 00:13:18.325 06:31:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:18.325 06:31:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.584 06:31:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.584 06:31:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:18.584 06:31:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:18.842 true 00:13:18.842 06:31:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:18.842 06:31:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.776 06:31:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
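The @44-@50 lines that now repeat for the rest of the run are the stress body: while spdk_nvme_perf (launched above as PID 79074 with -t 30 -q 128 -w randread -o 512, plus -Q 1000, which continues on error and rate-limits the error prints, hence the "Message suppressed 999 times" lines) is still alive, namespace 1 is detached and re-attached and NULL1 grows by one block per pass. A sketch of that loop; the exact placement of the liveness check inside the script's loop is reconstructed, not copied:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do     # stop once the 30 s perf run exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))
    $rpc bdev_null_resize NULL1 "$null_size"  # trace shows 1001, 1002, ... per pass
done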
00:13:20.035 06:31:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:20.035 06:31:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:20.294 true 00:13:20.294 06:31:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:20.294 06:31:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.553 06:31:13 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.813 06:31:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:20.813 06:31:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:21.072 true 00:13:21.072 06:31:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:21.072 06:31:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.008 06:31:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.008 06:31:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:22.008 06:31:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:22.267 true 00:13:22.267 06:31:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:22.267 06:31:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.525 06:31:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.784 06:31:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:22.784 06:31:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:23.043 true 00:13:23.043 06:31:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:23.043 06:31:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.980 06:31:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.980 06:31:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:23.980 06:31:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:24.239 true 00:13:24.239 06:31:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:24.239 06:31:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.497 06:31:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.757 06:31:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:24.757 06:31:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:25.016 true 00:13:25.016 06:31:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:25.016 06:31:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:25.952 06:31:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.211 06:31:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:26.211 06:31:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:26.211 true 00:13:26.211 06:31:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:26.211 06:31:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.790 06:31:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.790 06:31:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:26.790 06:31:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:27.062 true 00:13:27.062 06:31:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:27.062 06:31:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.322 06:31:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.581 06:31:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:27.581 06:31:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:27.839 true 00:13:27.839 06:31:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:27.840 06:31:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.775 06:31:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.034 06:31:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:29.034 06:31:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:29.292 true 00:13:29.292 06:31:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:29.292 06:31:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.227 06:31:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.227 06:31:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:30.228 06:31:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 
1016 00:13:30.486 true 00:13:30.486 06:31:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:30.486 06:31:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.745 06:31:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.004 06:31:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:31.004 06:31:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:31.263 true 00:13:31.263 06:31:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:31.263 06:31:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.241 06:31:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.500 06:31:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:32.500 06:31:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:32.500 true 00:13:32.500 06:31:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:32.500 06:31:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.759 06:31:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.017 06:31:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:33.017 06:31:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:33.275 true 00:13:33.275 06:31:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:33.275 06:31:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.210 06:31:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.469 06:31:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:34.469 06:31:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:34.469 true 00:13:34.469 06:31:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:34.469 06:31:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.729 06:31:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.988 06:31:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:34.988 06:31:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:35.247 true 00:13:35.247 06:31:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:35.247 06:31:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.184 06:31:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.184 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:36.442 06:31:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:36.442 06:31:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:36.700 true 00:13:36.700 06:31:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:36.700 06:31:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.700 06:31:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.959 06:31:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:36.959 06:31:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:37.217 true 00:13:37.217 06:31:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:37.217 06:31:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.151 06:31:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.410 06:31:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:38.410 06:31:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:38.668 true 00:13:38.668 06:31:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:38.668 06:31:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.927 06:31:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.185 06:31:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:39.185 06:31:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:39.444 true 00:13:39.444 06:31:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:39.444 06:31:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.702 06:31:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.961 06:31:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:39.961 06:31:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:39.961 true 00:13:39.961 06:31:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:39.961 06:31:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.339 06:31:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.339 06:31:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:41.339 06:31:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:41.598 true 00:13:41.598 06:31:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:41.598 06:31:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.534 06:31:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.793 06:31:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:42.793 06:31:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:42.793 true 00:13:42.793 06:31:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:42.793 06:31:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.052 06:31:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.311 06:31:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:43.311 06:31:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:43.570 true 00:13:43.570 06:31:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:43.570 06:31:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.507 Initializing NVMe Controllers 00:13:44.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.507 Controller IO queue size 128, less than required. 00:13:44.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.507 Controller IO queue size 128, less than required. 00:13:44.507 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:44.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:44.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:44.507 Initialization complete. Launching workers. 
00:13:44.507 ======================================================== 00:13:44.507 Latency(us) 00:13:44.507 Device Information : IOPS MiB/s Average min max 00:13:44.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 864.10 0.42 86599.66 2201.62 1121992.72 00:13:44.507 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14199.14 6.93 9014.59 2610.62 542054.10 00:13:44.507 ======================================================== 00:13:44.507 Total : 15063.24 7.36 13465.23 2201.62 1121992.72 00:13:44.507 00:13:44.507 06:31:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.766 06:31:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:44.766 06:31:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:45.024 true 00:13:45.024 06:31:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79074 00:13:45.024 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79074) - No such process 00:13:45.024 06:31:37 -- target/ns_hotplug_stress.sh@53 -- # wait 79074 00:13:45.024 06:31:37 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.282 06:31:37 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.541 06:31:37 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:45.541 06:31:37 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:45.541 06:31:37 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:45.541 06:31:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.541 06:31:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:45.541 null0 00:13:45.541 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:45.541 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:45.541 06:31:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:45.800 null1 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:46.059 null2 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:46.059 06:31:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:46.318 null3 00:13:46.318 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:46.318 06:31:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:46.318 06:31:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:46.577 null4 00:13:46.577 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:46.577 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:46.578 06:31:39 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:46.837 null5 00:13:46.837 06:31:39 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:46.837 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:46.837 06:31:39 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:47.096 null6 00:13:47.096 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.096 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.096 06:31:39 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:47.355 null7 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.355 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
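Stepping back to the Latency(us) summary printed above: the Total row is just the IOPS-weighted combination of the two per-namespace rows. A quick recomputation from the printed values (an illustrative awk snippet, not part of the test suite):

    awk 'BEGIN {
        iops1 = 864.10;   avg1 = 86599.66    # NSID 1 row
        iops2 = 14199.14; avg2 = 9014.59     # NSID 2 row
        total = iops1 + iops2                # 15063.24, matches the Total IOPS
        printf "weighted average = %.2f us\n", (iops1 * avg1 + iops2 * avg2) / total
    }'    # prints ~13465.24 us, matching the reported 13465.23 to rounding

The order-of-magnitude gap between the two rows (86.6 ms vs 9.0 ms average) is plausible given that NSID 1 is the namespace the script keeps re-backing with the Delay0 bdev, while NSID 2 sits on the NULL1 bdev that is only being resized.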
00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
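For context, the eight backing devices the hotplug threads exercise were created by the @58-@60 loop traced above. Reconstructed from the xtrace records (a sketch; $rpc stands in for the traced /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        # 100 MB null bdev with a 4096-byte block size, per the traced arguments
        "$rpc" bdev_null_create "null$i" 100 4096
    done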
00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@66 -- # wait 80137 80138 80140 80143 80144 80146 80148 80149 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.356 06:31:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:47.615 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.873 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.132 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
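Putting the traced line numbers together (@14-@18 for the worker, @62-@66 for the dispatch), the stress phase that produces the interleaving above is roughly the following. This is a reconstruction from the xtrace output, not a verbatim copy of ns_hotplug_stress.sh, and $rpc again stands in for the traced rpc.py path:

    add_remove() {
        # attach and detach one namespace ten times, as fast as the RPCs allow
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &    # one detached worker per namespace
        pids+=($!)
    done
    wait "${pids[@]}"    # the '@66 wait 80137 80138 ... 80149' record above

Because the eight workers run unsynchronized, the add/remove records for different namespaces land in arbitrary order, which is exactly the churn the trace shows until every worker finishes its ten iterations.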
00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.391 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:48.649 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:48.908 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.168 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.428 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.428 06:31:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.428 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.428 06:31:41 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:49.428 06:31:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.428 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.687 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.688 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.947 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.206 06:31:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.466 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.466 06:31:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.466 06:31:42 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.466 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.726 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.985 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.244 06:31:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.503 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.503 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.503 06:31:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.503 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.762 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.020 06:31:44 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.020 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.343 06:31:44 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.657 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:52.916 06:31:45 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:52.916 06:31:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:52.916 06:31:45 -- nvmf/common.sh@116 -- # sync 00:13:52.916 06:31:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:52.916 06:31:45 -- nvmf/common.sh@119 -- # set +e 00:13:52.916 06:31:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:52.916 06:31:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:52.916 rmmod nvme_tcp 00:13:52.916 rmmod nvme_fabrics 00:13:52.916 rmmod nvme_keyring 00:13:52.916 06:31:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:52.916 06:31:45 -- nvmf/common.sh@123 -- # set -e 00:13:52.916 06:31:45 -- nvmf/common.sh@124 -- # return 0 00:13:52.916 06:31:45 -- nvmf/common.sh@477 -- # '[' -n 78942 ']' 00:13:52.916 06:31:45 -- nvmf/common.sh@478 -- # killprocess 78942 00:13:52.916 06:31:45 -- common/autotest_common.sh@926 -- # '[' -z 78942 ']' 00:13:52.916 06:31:45 -- common/autotest_common.sh@930 -- # kill -0 
78942 00:13:52.916 06:31:45 -- common/autotest_common.sh@931 -- # uname 00:13:52.916 06:31:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:52.916 06:31:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78942 00:13:52.916 06:31:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:52.916 06:31:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:52.916 killing process with pid 78942 00:13:52.916 06:31:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78942' 00:13:52.916 06:31:45 -- common/autotest_common.sh@945 -- # kill 78942 00:13:52.916 06:31:45 -- common/autotest_common.sh@950 -- # wait 78942 00:13:53.175 06:31:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:53.175 06:31:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:53.175 06:31:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:53.175 06:31:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.175 06:31:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:53.175 06:31:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.175 06:31:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.175 06:31:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.434 06:31:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:53.434 00:13:53.434 real 0m43.863s 00:13:53.434 user 3m30.352s 00:13:53.434 sys 0m12.120s 00:13:53.434 06:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.434 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:13:53.434 ************************************ 00:13:53.434 END TEST nvmf_ns_hotplug_stress 00:13:53.434 ************************************ 00:13:53.434 06:31:45 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:53.434 06:31:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:53.434 06:31:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.434 06:31:45 -- common/autotest_common.sh@10 -- # set +x 00:13:53.434 ************************************ 00:13:53.434 START TEST nvmf_connect_stress 00:13:53.434 ************************************ 00:13:53.434 06:31:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:53.434 * Looking for test storage... 
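The nvmftestfini teardown just traced boils down to: unload the kernel initiator modules, kill the nvmf_tgt process, and flush the test interface. A condensed sketch of what the records show (the {1..20} modprobe retry loop, the process-name check, and remove_spdk_ns are elided, and $nvmfpid is a stand-in for the traced pid 78942):

    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp,
    modprobe -v -r nvme-fabrics    # nvme_fabrics and nvme_keyring going away
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush nvmf_init_if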
00:13:53.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:53.434 06:31:45 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:53.434 06:31:45 -- nvmf/common.sh@7 -- # uname -s 00:13:53.434 06:31:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.434 06:31:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.434 06:31:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.434 06:31:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.434 06:31:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.434 06:31:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.434 06:31:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.434 06:31:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.434 06:31:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.434 06:31:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.434 06:31:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:53.434 06:31:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:13:53.434 06:31:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.434 06:31:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.434 06:31:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:53.434 06:31:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.434 06:31:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.434 06:31:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.434 06:31:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.434 06:31:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.434 06:31:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.434 06:31:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.434 06:31:46 -- 
paths/export.sh@5 -- # export PATH 00:13:53.435 06:31:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.435 06:31:46 -- nvmf/common.sh@46 -- # : 0 00:13:53.435 06:31:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:53.435 06:31:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:53.435 06:31:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:53.435 06:31:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.435 06:31:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.435 06:31:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:53.435 06:31:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:53.435 06:31:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:53.435 06:31:46 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:53.435 06:31:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:53.435 06:31:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.435 06:31:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:53.435 06:31:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:53.435 06:31:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:53.435 06:31:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.435 06:31:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.435 06:31:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.435 06:31:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:53.435 06:31:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:53.435 06:31:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:53.435 06:31:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:53.435 06:31:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:53.435 06:31:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:53.435 06:31:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.435 06:31:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.435 06:31:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:53.435 06:31:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:53.435 06:31:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:53.435 06:31:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:53.435 06:31:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:53.435 06:31:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.435 06:31:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:53.435 06:31:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:53.435 06:31:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:53.435 06:31:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:53.435 06:31:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:53.435 06:31:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:53.435 Cannot find device "nvmf_tgt_br" 00:13:53.435 
06:31:46 -- nvmf/common.sh@154 -- # true 00:13:53.435 06:31:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:53.435 Cannot find device "nvmf_tgt_br2" 00:13:53.435 06:31:46 -- nvmf/common.sh@155 -- # true 00:13:53.435 06:31:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:53.435 06:31:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:53.435 Cannot find device "nvmf_tgt_br" 00:13:53.435 06:31:46 -- nvmf/common.sh@157 -- # true 00:13:53.435 06:31:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:53.435 Cannot find device "nvmf_tgt_br2" 00:13:53.435 06:31:46 -- nvmf/common.sh@158 -- # true 00:13:53.435 06:31:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:53.693 06:31:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:53.693 06:31:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:53.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.693 06:31:46 -- nvmf/common.sh@161 -- # true 00:13:53.693 06:31:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:53.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.693 06:31:46 -- nvmf/common.sh@162 -- # true 00:13:53.693 06:31:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:53.693 06:31:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:53.693 06:31:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:53.693 06:31:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:53.693 06:31:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:53.693 06:31:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:53.693 06:31:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:53.693 06:31:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:53.693 06:31:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:53.693 06:31:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:53.693 06:31:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:53.693 06:31:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:53.693 06:31:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:53.693 06:31:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:53.693 06:31:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:53.693 06:31:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:53.693 06:31:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:53.693 06:31:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:53.693 06:31:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:53.693 06:31:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:53.693 06:31:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:53.693 06:31:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:53.693 06:31:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:53.693 06:31:46 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:53.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:13:53.693 00:13:53.693 --- 10.0.0.2 ping statistics --- 00:13:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.693 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:53.693 06:31:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:53.693 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:53.693 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:13:53.693 00:13:53.693 --- 10.0.0.3 ping statistics --- 00:13:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.693 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:53.693 06:31:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:53.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:53.693 00:13:53.693 --- 10.0.0.1 ping statistics --- 00:13:53.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.693 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:53.693 06:31:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.693 06:31:46 -- nvmf/common.sh@421 -- # return 0 00:13:53.693 06:31:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:53.693 06:31:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.693 06:31:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:53.693 06:31:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:53.693 06:31:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.693 06:31:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:53.693 06:31:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:53.693 06:31:46 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:53.693 06:31:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:53.693 06:31:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:53.693 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:13:53.693 06:31:46 -- nvmf/common.sh@469 -- # nvmfpid=81476 00:13:53.693 06:31:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:53.693 06:31:46 -- nvmf/common.sh@470 -- # waitforlisten 81476 00:13:53.693 06:31:46 -- common/autotest_common.sh@819 -- # '[' -z 81476 ']' 00:13:53.693 06:31:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.693 06:31:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:53.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.693 06:31:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.693 06:31:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:53.693 06:31:46 -- common/autotest_common.sh@10 -- # set +x 00:13:53.952 [2024-10-04 06:31:46.409015] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
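The nvmf_veth_init trace above builds the test network from scratch: one veth pair for the initiator side and two for the target side, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, and an iptables rule admitting TCP/4420 on the initiator interface; the three pings then prove reachability in both directions before the target starts. A minimal standalone sketch of the same plumbing, using the names and addresses from the trace but omitting the second target link (nvmf_tgt_if2, 10.0.0.3) and the script's cleanup and error handling:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair stays in the host namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair; one end moves below
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host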
00:13:53.952 [2024-10-04 06:31:46.409095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.952 [2024-10-04 06:31:46.542295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:53.952 [2024-10-04 06:31:46.623347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:53.952 [2024-10-04 06:31:46.623793] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.952 [2024-10-04 06:31:46.623997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.952 [2024-10-04 06:31:46.624224] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.952 [2024-10-04 06:31:46.624499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.952 [2024-10-04 06:31:46.624572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.952 [2024-10-04 06:31:46.624575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.887 06:31:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:54.887 06:31:47 -- common/autotest_common.sh@852 -- # return 0 00:13:54.887 06:31:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:54.887 06:31:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:54.887 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.887 06:31:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.887 06:31:47 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.887 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.887 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.887 [2024-10-04 06:31:47.429251] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.887 06:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.887 06:31:47 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.887 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.887 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.887 06:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.887 06:31:47 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.887 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.887 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.887 [2024-10-04 06:31:47.451329] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.887 06:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.887 06:31:47 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.887 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.887 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.887 NULL1 00:13:54.887 06:31:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.887 06:31:47 -- target/connect_stress.sh@21 -- # PERF_PID=81528 00:13:54.888 06:31:47 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:54.888 06:31:47 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:54.888 06:31:47 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:54.888 06:31:47 -- target/connect_stress.sh@28 -- # cat 00:13:54.888 06:31:47 -- target/connect_stress.sh@34 -- # kill -0 81528 00:13:54.888 06:31:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.888 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.888 06:31:47 -- common/autotest_common.sh@10 -- # set +x 00:13:55.456 06:31:47 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:13:55.456 06:31:47 -- target/connect_stress.sh@34 -- # kill -0 81528 00:13:55.456 06:31:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.456 06:31:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.456 06:31:47 -- common/autotest_common.sh@10 -- # set +x [... this liveness iteration ([[ 0 == 0 ]], kill -0 81528, rpc_cmd, xtrace_disable, set +x) repeats roughly once per second while connect_stress runs, elapsed 00:13:55.714 (wall clock 06:31:48) through 00:14:04.576 (06:31:57); near-identical iterations elided ...] 00:14:05.142 06:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.142 06:31:57 -- target/connect_stress.sh@34 -- # 
kill -0 81528 00:14:05.142 06:31:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.142 06:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.142 06:31:57 -- common/autotest_common.sh@10 -- # set +x 00:14:05.142 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.400 06:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.400 06:31:57 -- target/connect_stress.sh@34 -- # kill -0 81528 00:14:05.400 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81528) - No such process 00:14:05.400 06:31:57 -- target/connect_stress.sh@38 -- # wait 81528 00:14:05.400 06:31:57 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:05.400 06:31:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:05.400 06:31:57 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:05.400 06:31:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.400 06:31:57 -- nvmf/common.sh@116 -- # sync 00:14:05.400 06:31:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.400 06:31:57 -- nvmf/common.sh@119 -- # set +e 00:14:05.400 06:31:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.400 06:31:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.400 rmmod nvme_tcp 00:14:05.400 rmmod nvme_fabrics 00:14:05.400 rmmod nvme_keyring 00:14:05.400 06:31:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.400 06:31:57 -- nvmf/common.sh@123 -- # set -e 00:14:05.400 06:31:57 -- nvmf/common.sh@124 -- # return 0 00:14:05.400 06:31:57 -- nvmf/common.sh@477 -- # '[' -n 81476 ']' 00:14:05.400 06:31:57 -- nvmf/common.sh@478 -- # killprocess 81476 00:14:05.400 06:31:57 -- common/autotest_common.sh@926 -- # '[' -z 81476 ']' 00:14:05.400 06:31:57 -- common/autotest_common.sh@930 -- # kill -0 81476 00:14:05.400 06:31:57 -- common/autotest_common.sh@931 -- # uname 00:14:05.400 06:31:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.400 06:31:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81476 00:14:05.400 06:31:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:05.400 killing process with pid 81476 00:14:05.400 06:31:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:05.400 06:31:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81476' 00:14:05.400 06:31:58 -- common/autotest_common.sh@945 -- # kill 81476 00:14:05.400 06:31:58 -- common/autotest_common.sh@950 -- # wait 81476 00:14:05.658 06:31:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.658 06:31:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:05.658 06:31:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:05.658 06:31:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.658 06:31:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:05.658 06:31:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.658 06:31:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.658 06:31:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.658 06:31:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:05.658 00:14:05.658 real 0m12.348s 00:14:05.658 user 0m41.443s 00:14:05.658 sys 0m3.107s 00:14:05.658 06:31:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.658 06:31:58 -- common/autotest_common.sh@10 -- # set +x 00:14:05.658 ************************************ 
00:14:05.658 END TEST nvmf_connect_stress 00:14:05.658 ************************************ 00:14:05.658 06:31:58 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:05.658 06:31:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:05.658 06:31:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.658 06:31:58 -- common/autotest_common.sh@10 -- # set +x 00:14:05.658 ************************************ 00:14:05.658 START TEST nvmf_fused_ordering 00:14:05.658 ************************************ 00:14:05.658 06:31:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:05.917 * Looking for test storage... 00:14:05.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.917 06:31:58 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.917 06:31:58 -- nvmf/common.sh@7 -- # uname -s 00:14:05.917 06:31:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.917 06:31:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.917 06:31:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.917 06:31:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.917 06:31:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.917 06:31:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.917 06:31:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.917 06:31:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.917 06:31:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.917 06:31:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.917 06:31:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:05.917 06:31:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:05.917 06:31:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.917 06:31:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.917 06:31:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.917 06:31:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.917 06:31:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.918 06:31:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.918 06:31:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.918 06:31:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.918 06:31:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.918 06:31:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.918 06:31:58 -- paths/export.sh@5 -- # export PATH 00:14:05.918 06:31:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.918 06:31:58 -- nvmf/common.sh@46 -- # : 0 00:14:05.918 06:31:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.918 06:31:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.918 06:31:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.918 06:31:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.918 06:31:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.918 06:31:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:05.918 06:31:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.918 06:31:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.918 06:31:58 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:05.918 06:31:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:05.918 06:31:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.918 06:31:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.918 06:31:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:05.918 06:31:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.918 06:31:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.918 06:31:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.918 06:31:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.918 06:31:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:05.918 06:31:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:05.918 06:31:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:05.918 06:31:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:05.918 06:31:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:05.918 06:31:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:05.918 06:31:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.918 
06:31:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.918 06:31:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.918 06:31:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:05.918 06:31:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.918 06:31:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.918 06:31:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.918 06:31:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.918 06:31:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.918 06:31:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.918 06:31:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.918 06:31:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.918 06:31:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.918 06:31:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:05.918 Cannot find device "nvmf_tgt_br" 00:14:05.918 06:31:58 -- nvmf/common.sh@154 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.918 Cannot find device "nvmf_tgt_br2" 00:14:05.918 06:31:58 -- nvmf/common.sh@155 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:05.918 06:31:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:05.918 Cannot find device "nvmf_tgt_br" 00:14:05.918 06:31:58 -- nvmf/common.sh@157 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:05.918 Cannot find device "nvmf_tgt_br2" 00:14:05.918 06:31:58 -- nvmf/common.sh@158 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:05.918 06:31:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:05.918 06:31:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.918 06:31:58 -- nvmf/common.sh@161 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.918 06:31:58 -- nvmf/common.sh@162 -- # true 00:14:05.918 06:31:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.918 06:31:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.918 06:31:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.918 06:31:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.918 06:31:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.177 06:31:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.177 06:31:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.177 06:31:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.177 06:31:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.177 06:31:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.177 06:31:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.177 
06:31:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.177 06:31:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.177 06:31:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.177 06:31:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.177 06:31:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.177 06:31:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.177 06:31:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.177 06:31:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.177 06:31:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.177 06:31:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.177 06:31:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.177 06:31:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.177 06:31:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:06.177 00:14:06.177 --- 10.0.0.2 ping statistics --- 00:14:06.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.177 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:06.177 06:31:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:06.177 00:14:06.177 --- 10.0.0.3 ping statistics --- 00:14:06.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.177 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:06.177 06:31:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:06.177 00:14:06.177 --- 10.0.0.1 ping statistics --- 00:14:06.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.177 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:06.177 06:31:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.177 06:31:58 -- nvmf/common.sh@421 -- # return 0 00:14:06.177 06:31:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.177 06:31:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.177 06:31:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.177 06:31:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.177 06:31:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.177 06:31:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.177 06:31:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.177 06:31:58 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:06.177 06:31:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.177 06:31:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.177 06:31:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.177 06:31:58 -- nvmf/common.sh@469 -- # nvmfpid=81854 00:14:06.177 06:31:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.177 06:31:58 -- nvmf/common.sh@470 -- # waitforlisten 81854 00:14:06.177 06:31:58 -- common/autotest_common.sh@819 -- # '[' -z 81854 ']' 00:14:06.177 06:31:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.177 06:31:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.177 06:31:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.177 06:31:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.177 06:31:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.436 [2024-10-04 06:31:58.859443] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:06.436 [2024-10-04 06:31:58.859535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.436 [2024-10-04 06:31:59.000597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.436 [2024-10-04 06:31:59.066217] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.436 [2024-10-04 06:31:59.066393] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.436 [2024-10-04 06:31:59.066410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.436 [2024-10-04 06:31:59.066421] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
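nvmfappstart above launches nvmf_tgt inside the target namespace (via NVMF_TARGET_NS_CMD) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock; the same kill -0 probe is what the connect_stress section earlier repeats once per second to confirm its stress tool is still alive. A rough sketch of that launch-and-wait pattern; the polling loop, retry count, sleep interval, and use of rpc_get_methods are assumptions rather than the harness's exact code:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for (( i = 0; i < 100; i++ )); do
    kill -0 "$nvmfpid" || exit 1      # bail out if the target died during startup
    # rpc_get_methods succeeds only once the RPC server is accepting connections
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done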
00:14:06.436 [2024-10-04 06:31:59.066473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.373 06:31:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.373 06:31:59 -- common/autotest_common.sh@852 -- # return 0 00:14:07.373 06:31:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.373 06:31:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 06:31:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.373 06:31:59 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 [2024-10-04 06:31:59.954734] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 [2024-10-04 06:31:59.970885] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 NULL1 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:07.373 06:31:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.373 06:31:59 -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 06:31:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.373 06:31:59 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:07.373 [2024-10-04 06:32:00.021703] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
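The rpc_cmd calls above provision the target end to end: a TCP transport with the traced options (-o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and at most 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB, 512-byte-block null bdev attached as namespace 1 (the "size: 1GB" that fused_ordering reports below). The same sequence expressed as direct rpc.py calls against the default socket, a sketch on the assumption that rpc_cmd is a thin wrapper around rpc.py:

cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512    # name, size in MB, block size in bytes
scripts/rpc.py bdev_wait_for_examine              # let bdev examination settle before exposing the namespace
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1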
00:14:07.373 [2024-10-04 06:32:00.021752] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81904 ] 00:14:07.940 Attached to nqn.2016-06.io.spdk:cnode1 00:14:07.940 Namespace ID: 1 size: 1GB 00:14:07.940 fused_ordering(0) 00:14:07.940 fused_ordering(1) 00:14:07.940 fused_ordering(2) 00:14:07.940 fused_ordering(3) 00:14:07.940 fused_ordering(4) 00:14:07.940 fused_ordering(5) 00:14:07.940 fused_ordering(6) 00:14:07.940 fused_ordering(7) 00:14:07.940 fused_ordering(8) 00:14:07.940 fused_ordering(9) 00:14:07.940 fused_ordering(10) 00:14:07.940 fused_ordering(11) 00:14:07.940 fused_ordering(12) 00:14:07.940 fused_ordering(13) 00:14:07.940 fused_ordering(14) 00:14:07.940 fused_ordering(15) 00:14:07.940 fused_ordering(16) 00:14:07.940 fused_ordering(17) 00:14:07.940 fused_ordering(18) 00:14:07.940 fused_ordering(19) 00:14:07.940 fused_ordering(20) 00:14:07.940 fused_ordering(21) 00:14:07.940 fused_ordering(22) 00:14:07.940 fused_ordering(23) 00:14:07.940 fused_ordering(24) 00:14:07.940 fused_ordering(25) 00:14:07.940 fused_ordering(26) 00:14:07.940 fused_ordering(27) 00:14:07.940 fused_ordering(28) 00:14:07.940 fused_ordering(29) 00:14:07.940 fused_ordering(30) 00:14:07.940 fused_ordering(31) 00:14:07.940 fused_ordering(32) 00:14:07.940 fused_ordering(33) 00:14:07.940 fused_ordering(34) 00:14:07.940 fused_ordering(35) 00:14:07.940 fused_ordering(36) 00:14:07.940 fused_ordering(37) 00:14:07.940 fused_ordering(38) 00:14:07.940 fused_ordering(39) 00:14:07.940 fused_ordering(40) 00:14:07.940 fused_ordering(41) 00:14:07.940 fused_ordering(42) 00:14:07.940 fused_ordering(43) 00:14:07.940 fused_ordering(44) 00:14:07.940 fused_ordering(45) 00:14:07.940 fused_ordering(46) 00:14:07.940 fused_ordering(47) 00:14:07.940 fused_ordering(48) 00:14:07.940 fused_ordering(49) 00:14:07.940 fused_ordering(50) 00:14:07.940 fused_ordering(51) 00:14:07.940 fused_ordering(52) 00:14:07.940 fused_ordering(53) 00:14:07.940 fused_ordering(54) 00:14:07.940 fused_ordering(55) 00:14:07.940 fused_ordering(56) 00:14:07.940 fused_ordering(57) 00:14:07.940 fused_ordering(58) 00:14:07.940 fused_ordering(59) 00:14:07.940 fused_ordering(60) 00:14:07.940 fused_ordering(61) 00:14:07.940 fused_ordering(62) 00:14:07.940 fused_ordering(63) 00:14:07.940 fused_ordering(64) 00:14:07.940 fused_ordering(65) 00:14:07.940 fused_ordering(66) 00:14:07.940 fused_ordering(67) 00:14:07.940 fused_ordering(68) 00:14:07.940 fused_ordering(69) 00:14:07.940 fused_ordering(70) 00:14:07.940 fused_ordering(71) 00:14:07.940 fused_ordering(72) 00:14:07.940 fused_ordering(73) 00:14:07.940 fused_ordering(74) 00:14:07.940 fused_ordering(75) 00:14:07.940 fused_ordering(76) 00:14:07.940 fused_ordering(77) 00:14:07.940 fused_ordering(78) 00:14:07.940 fused_ordering(79) 00:14:07.940 fused_ordering(80) 00:14:07.940 fused_ordering(81) 00:14:07.940 fused_ordering(82) 00:14:07.940 fused_ordering(83) 00:14:07.940 fused_ordering(84) 00:14:07.940 fused_ordering(85) 00:14:07.940 fused_ordering(86) 00:14:07.940 fused_ordering(87) 00:14:07.940 fused_ordering(88) 00:14:07.940 fused_ordering(89) 00:14:07.940 fused_ordering(90) 00:14:07.940 fused_ordering(91) 00:14:07.940 fused_ordering(92) 00:14:07.940 fused_ordering(93) 00:14:07.940 fused_ordering(94) 00:14:07.940 fused_ordering(95) 00:14:07.940 fused_ordering(96) 00:14:07.940 fused_ordering(97) 00:14:07.940 fused_ordering(98) 
00:14:07.940 fused_ordering(99) [... fused_ordering(100) through fused_ordering(635) logged consecutively; elapsed timestamps advance from 00:14:07.940 to 00:14:08.460 up to fused_ordering(615), with fused_ordering(616) through fused_ordering(635) following after a pause at 00:14:09.025; 536 near-identical counter lines elided ...] 00:14:09.025 
fused_ordering(636) 00:14:09.025 fused_ordering(637) 00:14:09.025 fused_ordering(638) 00:14:09.025 fused_ordering(639) 00:14:09.025 fused_ordering(640) 00:14:09.025 fused_ordering(641) 00:14:09.025 fused_ordering(642) 00:14:09.025 fused_ordering(643) 00:14:09.025 fused_ordering(644) 00:14:09.025 fused_ordering(645) 00:14:09.025 fused_ordering(646) 00:14:09.025 fused_ordering(647) 00:14:09.025 fused_ordering(648) 00:14:09.025 fused_ordering(649) 00:14:09.025 fused_ordering(650) 00:14:09.025 fused_ordering(651) 00:14:09.025 fused_ordering(652) 00:14:09.025 fused_ordering(653) 00:14:09.025 fused_ordering(654) 00:14:09.025 fused_ordering(655) 00:14:09.025 fused_ordering(656) 00:14:09.025 fused_ordering(657) 00:14:09.025 fused_ordering(658) 00:14:09.025 fused_ordering(659) 00:14:09.025 fused_ordering(660) 00:14:09.025 fused_ordering(661) 00:14:09.025 fused_ordering(662) 00:14:09.025 fused_ordering(663) 00:14:09.025 fused_ordering(664) 00:14:09.025 fused_ordering(665) 00:14:09.025 fused_ordering(666) 00:14:09.025 fused_ordering(667) 00:14:09.025 fused_ordering(668) 00:14:09.025 fused_ordering(669) 00:14:09.025 fused_ordering(670) 00:14:09.025 fused_ordering(671) 00:14:09.025 fused_ordering(672) 00:14:09.025 fused_ordering(673) 00:14:09.025 fused_ordering(674) 00:14:09.025 fused_ordering(675) 00:14:09.025 fused_ordering(676) 00:14:09.025 fused_ordering(677) 00:14:09.025 fused_ordering(678) 00:14:09.025 fused_ordering(679) 00:14:09.025 fused_ordering(680) 00:14:09.025 fused_ordering(681) 00:14:09.025 fused_ordering(682) 00:14:09.025 fused_ordering(683) 00:14:09.025 fused_ordering(684) 00:14:09.025 fused_ordering(685) 00:14:09.025 fused_ordering(686) 00:14:09.025 fused_ordering(687) 00:14:09.025 fused_ordering(688) 00:14:09.025 fused_ordering(689) 00:14:09.025 fused_ordering(690) 00:14:09.025 fused_ordering(691) 00:14:09.025 fused_ordering(692) 00:14:09.025 fused_ordering(693) 00:14:09.025 fused_ordering(694) 00:14:09.025 fused_ordering(695) 00:14:09.025 fused_ordering(696) 00:14:09.025 fused_ordering(697) 00:14:09.025 fused_ordering(698) 00:14:09.025 fused_ordering(699) 00:14:09.025 fused_ordering(700) 00:14:09.025 fused_ordering(701) 00:14:09.025 fused_ordering(702) 00:14:09.025 fused_ordering(703) 00:14:09.025 fused_ordering(704) 00:14:09.025 fused_ordering(705) 00:14:09.025 fused_ordering(706) 00:14:09.025 fused_ordering(707) 00:14:09.025 fused_ordering(708) 00:14:09.025 fused_ordering(709) 00:14:09.025 fused_ordering(710) 00:14:09.025 fused_ordering(711) 00:14:09.025 fused_ordering(712) 00:14:09.025 fused_ordering(713) 00:14:09.025 fused_ordering(714) 00:14:09.025 fused_ordering(715) 00:14:09.025 fused_ordering(716) 00:14:09.025 fused_ordering(717) 00:14:09.025 fused_ordering(718) 00:14:09.025 fused_ordering(719) 00:14:09.025 fused_ordering(720) 00:14:09.025 fused_ordering(721) 00:14:09.025 fused_ordering(722) 00:14:09.025 fused_ordering(723) 00:14:09.025 fused_ordering(724) 00:14:09.025 fused_ordering(725) 00:14:09.025 fused_ordering(726) 00:14:09.025 fused_ordering(727) 00:14:09.025 fused_ordering(728) 00:14:09.025 fused_ordering(729) 00:14:09.025 fused_ordering(730) 00:14:09.025 fused_ordering(731) 00:14:09.025 fused_ordering(732) 00:14:09.025 fused_ordering(733) 00:14:09.025 fused_ordering(734) 00:14:09.025 fused_ordering(735) 00:14:09.025 fused_ordering(736) 00:14:09.025 fused_ordering(737) 00:14:09.025 fused_ordering(738) 00:14:09.025 fused_ordering(739) 00:14:09.025 fused_ordering(740) 00:14:09.025 fused_ordering(741) 00:14:09.025 fused_ordering(742) 00:14:09.025 fused_ordering(743) 
00:14:09.025 fused_ordering(744) 00:14:09.025 fused_ordering(745) 00:14:09.025 fused_ordering(746) 00:14:09.025 fused_ordering(747) 00:14:09.025 fused_ordering(748) 00:14:09.025 fused_ordering(749) 00:14:09.025 fused_ordering(750) 00:14:09.025 fused_ordering(751) 00:14:09.025 fused_ordering(752) 00:14:09.025 fused_ordering(753) 00:14:09.025 fused_ordering(754) 00:14:09.025 fused_ordering(755) 00:14:09.025 fused_ordering(756) 00:14:09.025 fused_ordering(757) 00:14:09.025 fused_ordering(758) 00:14:09.025 fused_ordering(759) 00:14:09.025 fused_ordering(760) 00:14:09.025 fused_ordering(761) 00:14:09.025 fused_ordering(762) 00:14:09.025 fused_ordering(763) 00:14:09.025 fused_ordering(764) 00:14:09.025 fused_ordering(765) 00:14:09.025 fused_ordering(766) 00:14:09.025 fused_ordering(767) 00:14:09.025 fused_ordering(768) 00:14:09.025 fused_ordering(769) 00:14:09.025 fused_ordering(770) 00:14:09.025 fused_ordering(771) 00:14:09.025 fused_ordering(772) 00:14:09.025 fused_ordering(773) 00:14:09.025 fused_ordering(774) 00:14:09.025 fused_ordering(775) 00:14:09.025 fused_ordering(776) 00:14:09.025 fused_ordering(777) 00:14:09.025 fused_ordering(778) 00:14:09.025 fused_ordering(779) 00:14:09.025 fused_ordering(780) 00:14:09.025 fused_ordering(781) 00:14:09.025 fused_ordering(782) 00:14:09.025 fused_ordering(783) 00:14:09.025 fused_ordering(784) 00:14:09.025 fused_ordering(785) 00:14:09.025 fused_ordering(786) 00:14:09.025 fused_ordering(787) 00:14:09.025 fused_ordering(788) 00:14:09.025 fused_ordering(789) 00:14:09.025 fused_ordering(790) 00:14:09.025 fused_ordering(791) 00:14:09.025 fused_ordering(792) 00:14:09.025 fused_ordering(793) 00:14:09.025 fused_ordering(794) 00:14:09.025 fused_ordering(795) 00:14:09.025 fused_ordering(796) 00:14:09.025 fused_ordering(797) 00:14:09.025 fused_ordering(798) 00:14:09.025 fused_ordering(799) 00:14:09.025 fused_ordering(800) 00:14:09.025 fused_ordering(801) 00:14:09.025 fused_ordering(802) 00:14:09.025 fused_ordering(803) 00:14:09.025 fused_ordering(804) 00:14:09.025 fused_ordering(805) 00:14:09.025 fused_ordering(806) 00:14:09.025 fused_ordering(807) 00:14:09.025 fused_ordering(808) 00:14:09.025 fused_ordering(809) 00:14:09.025 fused_ordering(810) 00:14:09.025 fused_ordering(811) 00:14:09.025 fused_ordering(812) 00:14:09.025 fused_ordering(813) 00:14:09.025 fused_ordering(814) 00:14:09.025 fused_ordering(815) 00:14:09.025 fused_ordering(816) 00:14:09.025 fused_ordering(817) 00:14:09.025 fused_ordering(818) 00:14:09.025 fused_ordering(819) 00:14:09.025 fused_ordering(820) 00:14:09.592 fused_ordering(821) 00:14:09.592 fused_ordering(822) 00:14:09.592 fused_ordering(823) 00:14:09.592 fused_ordering(824) 00:14:09.592 fused_ordering(825) 00:14:09.592 fused_ordering(826) 00:14:09.592 fused_ordering(827) 00:14:09.592 fused_ordering(828) 00:14:09.592 fused_ordering(829) 00:14:09.592 fused_ordering(830) 00:14:09.592 fused_ordering(831) 00:14:09.592 fused_ordering(832) 00:14:09.592 fused_ordering(833) 00:14:09.592 fused_ordering(834) 00:14:09.592 fused_ordering(835) 00:14:09.592 fused_ordering(836) 00:14:09.592 fused_ordering(837) 00:14:09.592 fused_ordering(838) 00:14:09.592 fused_ordering(839) 00:14:09.592 fused_ordering(840) 00:14:09.592 fused_ordering(841) 00:14:09.592 fused_ordering(842) 00:14:09.592 fused_ordering(843) 00:14:09.592 fused_ordering(844) 00:14:09.592 fused_ordering(845) 00:14:09.592 fused_ordering(846) 00:14:09.592 fused_ordering(847) 00:14:09.592 fused_ordering(848) 00:14:09.592 fused_ordering(849) 00:14:09.592 fused_ordering(850) 00:14:09.592 
fused_ordering(851) 00:14:09.592 fused_ordering(852) 00:14:09.592 fused_ordering(853) 00:14:09.592 fused_ordering(854) 00:14:09.592 fused_ordering(855) 00:14:09.592 fused_ordering(856) 00:14:09.592 fused_ordering(857) 00:14:09.592 fused_ordering(858) 00:14:09.592 fused_ordering(859) 00:14:09.592 fused_ordering(860) 00:14:09.592 fused_ordering(861) 00:14:09.592 fused_ordering(862) 00:14:09.592 fused_ordering(863) 00:14:09.592 fused_ordering(864) 00:14:09.592 fused_ordering(865) 00:14:09.592 fused_ordering(866) 00:14:09.592 fused_ordering(867) 00:14:09.592 fused_ordering(868) 00:14:09.592 fused_ordering(869) 00:14:09.592 fused_ordering(870) 00:14:09.592 fused_ordering(871) 00:14:09.592 fused_ordering(872) 00:14:09.592 fused_ordering(873) 00:14:09.592 fused_ordering(874) 00:14:09.592 fused_ordering(875) 00:14:09.592 fused_ordering(876) 00:14:09.592 fused_ordering(877) 00:14:09.592 fused_ordering(878) 00:14:09.592 fused_ordering(879) 00:14:09.592 fused_ordering(880) 00:14:09.592 fused_ordering(881) 00:14:09.592 fused_ordering(882) 00:14:09.592 fused_ordering(883) 00:14:09.592 fused_ordering(884) 00:14:09.592 fused_ordering(885) 00:14:09.592 fused_ordering(886) 00:14:09.592 fused_ordering(887) 00:14:09.592 fused_ordering(888) 00:14:09.592 fused_ordering(889) 00:14:09.592 fused_ordering(890) 00:14:09.592 fused_ordering(891) 00:14:09.592 fused_ordering(892) 00:14:09.592 fused_ordering(893) 00:14:09.592 fused_ordering(894) 00:14:09.592 fused_ordering(895) 00:14:09.592 fused_ordering(896) 00:14:09.592 fused_ordering(897) 00:14:09.592 fused_ordering(898) 00:14:09.592 fused_ordering(899) 00:14:09.592 fused_ordering(900) 00:14:09.592 fused_ordering(901) 00:14:09.592 fused_ordering(902) 00:14:09.592 fused_ordering(903) 00:14:09.592 fused_ordering(904) 00:14:09.592 fused_ordering(905) 00:14:09.592 fused_ordering(906) 00:14:09.592 fused_ordering(907) 00:14:09.592 fused_ordering(908) 00:14:09.592 fused_ordering(909) 00:14:09.592 fused_ordering(910) 00:14:09.592 fused_ordering(911) 00:14:09.592 fused_ordering(912) 00:14:09.592 fused_ordering(913) 00:14:09.592 fused_ordering(914) 00:14:09.592 fused_ordering(915) 00:14:09.592 fused_ordering(916) 00:14:09.592 fused_ordering(917) 00:14:09.592 fused_ordering(918) 00:14:09.592 fused_ordering(919) 00:14:09.592 fused_ordering(920) 00:14:09.592 fused_ordering(921) 00:14:09.592 fused_ordering(922) 00:14:09.592 fused_ordering(923) 00:14:09.592 fused_ordering(924) 00:14:09.592 fused_ordering(925) 00:14:09.592 fused_ordering(926) 00:14:09.592 fused_ordering(927) 00:14:09.592 fused_ordering(928) 00:14:09.592 fused_ordering(929) 00:14:09.592 fused_ordering(930) 00:14:09.592 fused_ordering(931) 00:14:09.592 fused_ordering(932) 00:14:09.592 fused_ordering(933) 00:14:09.592 fused_ordering(934) 00:14:09.592 fused_ordering(935) 00:14:09.592 fused_ordering(936) 00:14:09.592 fused_ordering(937) 00:14:09.592 fused_ordering(938) 00:14:09.592 fused_ordering(939) 00:14:09.592 fused_ordering(940) 00:14:09.592 fused_ordering(941) 00:14:09.592 fused_ordering(942) 00:14:09.592 fused_ordering(943) 00:14:09.592 fused_ordering(944) 00:14:09.592 fused_ordering(945) 00:14:09.592 fused_ordering(946) 00:14:09.592 fused_ordering(947) 00:14:09.592 fused_ordering(948) 00:14:09.592 fused_ordering(949) 00:14:09.592 fused_ordering(950) 00:14:09.592 fused_ordering(951) 00:14:09.592 fused_ordering(952) 00:14:09.592 fused_ordering(953) 00:14:09.592 fused_ordering(954) 00:14:09.592 fused_ordering(955) 00:14:09.592 fused_ordering(956) 00:14:09.592 fused_ordering(957) 00:14:09.592 fused_ordering(958) 
00:14:09.592 fused_ordering(959) 00:14:09.592 fused_ordering(960) 00:14:09.592 fused_ordering(961) 00:14:09.592 fused_ordering(962) 00:14:09.592 fused_ordering(963) 00:14:09.592 fused_ordering(964) 00:14:09.592 fused_ordering(965) 00:14:09.592 fused_ordering(966) 00:14:09.592 fused_ordering(967) 00:14:09.592 fused_ordering(968) 00:14:09.592 fused_ordering(969) 00:14:09.592 fused_ordering(970) 00:14:09.592 fused_ordering(971) 00:14:09.592 fused_ordering(972) 00:14:09.592 fused_ordering(973) 00:14:09.592 fused_ordering(974) 00:14:09.592 fused_ordering(975) 00:14:09.592 fused_ordering(976) 00:14:09.592 fused_ordering(977) 00:14:09.592 fused_ordering(978) 00:14:09.592 fused_ordering(979) 00:14:09.592 fused_ordering(980) 00:14:09.592 fused_ordering(981) 00:14:09.592 fused_ordering(982) 00:14:09.592 fused_ordering(983) 00:14:09.592 fused_ordering(984) 00:14:09.592 fused_ordering(985) 00:14:09.592 fused_ordering(986) 00:14:09.592 fused_ordering(987) 00:14:09.592 fused_ordering(988) 00:14:09.592 fused_ordering(989) 00:14:09.592 fused_ordering(990) 00:14:09.592 fused_ordering(991) 00:14:09.592 fused_ordering(992) 00:14:09.592 fused_ordering(993) 00:14:09.592 fused_ordering(994) 00:14:09.592 fused_ordering(995) 00:14:09.592 fused_ordering(996) 00:14:09.592 fused_ordering(997) 00:14:09.592 fused_ordering(998) 00:14:09.592 fused_ordering(999) 00:14:09.592 fused_ordering(1000) 00:14:09.592 fused_ordering(1001) 00:14:09.592 fused_ordering(1002) 00:14:09.592 fused_ordering(1003) 00:14:09.592 fused_ordering(1004) 00:14:09.592 fused_ordering(1005) 00:14:09.592 fused_ordering(1006) 00:14:09.592 fused_ordering(1007) 00:14:09.592 fused_ordering(1008) 00:14:09.592 fused_ordering(1009) 00:14:09.592 fused_ordering(1010) 00:14:09.592 fused_ordering(1011) 00:14:09.592 fused_ordering(1012) 00:14:09.592 fused_ordering(1013) 00:14:09.592 fused_ordering(1014) 00:14:09.592 fused_ordering(1015) 00:14:09.592 fused_ordering(1016) 00:14:09.592 fused_ordering(1017) 00:14:09.592 fused_ordering(1018) 00:14:09.592 fused_ordering(1019) 00:14:09.592 fused_ordering(1020) 00:14:09.592 fused_ordering(1021) 00:14:09.592 fused_ordering(1022) 00:14:09.592 fused_ordering(1023) 00:14:09.592 06:32:01 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:09.592 06:32:01 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:09.592 06:32:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:09.592 06:32:01 -- nvmf/common.sh@116 -- # sync 00:14:09.592 06:32:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:09.592 06:32:02 -- nvmf/common.sh@119 -- # set +e 00:14:09.593 06:32:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:09.593 06:32:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:09.593 rmmod nvme_tcp 00:14:09.593 rmmod nvme_fabrics 00:14:09.593 rmmod nvme_keyring 00:14:09.593 06:32:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.593 06:32:02 -- nvmf/common.sh@123 -- # set -e 00:14:09.593 06:32:02 -- nvmf/common.sh@124 -- # return 0 00:14:09.593 06:32:02 -- nvmf/common.sh@477 -- # '[' -n 81854 ']' 00:14:09.593 06:32:02 -- nvmf/common.sh@478 -- # killprocess 81854 00:14:09.593 06:32:02 -- common/autotest_common.sh@926 -- # '[' -z 81854 ']' 00:14:09.593 06:32:02 -- common/autotest_common.sh@930 -- # kill -0 81854 00:14:09.593 06:32:02 -- common/autotest_common.sh@931 -- # uname 00:14:09.593 06:32:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:09.593 06:32:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81854 00:14:09.593 06:32:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:09.593 06:32:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:09.593 killing process with pid 81854 00:14:09.593 06:32:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81854' 00:14:09.593 06:32:02 -- common/autotest_common.sh@945 -- # kill 81854 00:14:09.593 06:32:02 -- common/autotest_common.sh@950 -- # wait 81854 00:14:09.852 06:32:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:09.852 06:32:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:09.852 06:32:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:09.852 06:32:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.852 06:32:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:09.852 06:32:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.852 06:32:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.852 06:32:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.852 06:32:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:09.852 00:14:09.852 real 0m4.096s 00:14:09.852 user 0m4.667s 00:14:09.852 sys 0m1.491s 00:14:09.852 06:32:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.852 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:09.852 ************************************ 00:14:09.852 END TEST nvmf_fused_ordering 00:14:09.852 ************************************ 00:14:09.852 06:32:02 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:09.852 06:32:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:09.852 06:32:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.852 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:09.852 ************************************ 00:14:09.852 START TEST nvmf_delete_subsystem 00:14:09.852 ************************************ 00:14:09.852 06:32:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:10.111 * Looking for test storage... 
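Before the next test's output begins: the nvmftestfini/killprocess teardown that closed out nvmf_fused_ordering above reduces to a small shell pattern, unload the NVMe-oF kernel modules with retries, then signal the target process and reap it. A minimal sketch under those assumptions (simplified error handling; this is not the verbatim nvmf/common.sh implementation, and nvmf_pid is a hypothetical variable name):

    # Sketch of the module-unload-then-kill teardown traced in the log above.
    nvmfcleanup_sketch() {
        sync
        set +e
        for i in {1..20}; do
            # -r also drags out dependent modules (the log shows rmmod of
            # nvme_tcp, nvme_fabrics and nvme_keyring)
            modprobe -v -r nvme-tcp && break
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    killprocess_sketch() {
        local nvmf_pid=$1                  # 81854 in this run
        kill -0 "$nvmf_pid" || return 0    # nothing to do if already gone
        kill "$nvmf_pid"
        wait "$nvmf_pid" || true           # reap the child; ignore its exit status
    }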
00:14:10.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.111 06:32:02 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.111 06:32:02 -- nvmf/common.sh@7 -- # uname -s 00:14:10.111 06:32:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.111 06:32:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.111 06:32:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.111 06:32:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.111 06:32:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.111 06:32:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.111 06:32:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.111 06:32:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.111 06:32:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.111 06:32:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.111 06:32:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:10.111 06:32:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:10.111 06:32:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.111 06:32:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.111 06:32:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.111 06:32:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.111 06:32:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.111 06:32:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.111 06:32:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.111 06:32:02 -- paths/export.sh@2 -- # PATH=[toolchain dirs /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin, repeated by successive prepends; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.111 06:32:02 -- paths/export.sh@3 -- # PATH=[same value with /opt/go/1.21.1/bin prepended; duplicates elided] 00:14:10.112 06:32:02 -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin prepended; duplicates elided] 00:14:10.112 06:32:02 -- paths/export.sh@5 -- # export PATH 00:14:10.112 06:32:02 -- paths/export.sh@6 -- # echo [the PATH value above; duplicates elided] 00:14:10.112 06:32:02 -- nvmf/common.sh@46 -- # : 0 00:14:10.112 06:32:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:10.112 06:32:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:10.112 06:32:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:10.112 06:32:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.112 06:32:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.112 06:32:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:10.112 06:32:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:10.112 06:32:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:10.112 06:32:02 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:10.112 06:32:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:10.112 06:32:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.112 06:32:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:10.112 06:32:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:10.112 06:32:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:10.112 06:32:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.112 06:32:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.112 06:32:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.112 06:32:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:10.112 06:32:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:10.112 06:32:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:10.112 06:32:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:10.112 06:32:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:10.112 06:32:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:10.112 06:32:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.112 06:32:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.112 06:32:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.112 06:32:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:10.112 06:32:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.112 06:32:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.112 06:32:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.112 06:32:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.112 06:32:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.112 06:32:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.112 06:32:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.112 06:32:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.112 06:32:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:10.112 06:32:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:10.112 Cannot find device "nvmf_tgt_br" 00:14:10.112
06:32:02 -- nvmf/common.sh@154 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.112 Cannot find device "nvmf_tgt_br2" 00:14:10.112 06:32:02 -- nvmf/common.sh@155 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:10.112 06:32:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:10.112 Cannot find device "nvmf_tgt_br" 00:14:10.112 06:32:02 -- nvmf/common.sh@157 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:10.112 Cannot find device "nvmf_tgt_br2" 00:14:10.112 06:32:02 -- nvmf/common.sh@158 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:10.112 06:32:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:10.112 06:32:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.112 06:32:02 -- nvmf/common.sh@161 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.112 06:32:02 -- nvmf/common.sh@162 -- # true 00:14:10.112 06:32:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.112 06:32:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.112 06:32:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.112 06:32:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.112 06:32:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.112 06:32:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.112 06:32:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.112 06:32:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.371 06:32:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.371 06:32:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:10.371 06:32:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:10.371 06:32:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:10.371 06:32:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:10.371 06:32:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.371 06:32:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.371 06:32:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.371 06:32:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:10.371 06:32:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:10.371 06:32:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.371 06:32:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.371 06:32:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.371 06:32:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.371 06:32:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.371 06:32:02 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:14:10.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:10.371 00:14:10.371 --- 10.0.0.2 ping statistics --- 00:14:10.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.371 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:10.371 06:32:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:10.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:10.371 00:14:10.371 --- 10.0.0.3 ping statistics --- 00:14:10.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.371 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:10.371 06:32:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:10.371 00:14:10.371 --- 10.0.0.1 ping statistics --- 00:14:10.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.371 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:10.371 06:32:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.371 06:32:02 -- nvmf/common.sh@421 -- # return 0 00:14:10.371 06:32:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:10.371 06:32:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.371 06:32:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:10.371 06:32:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:10.371 06:32:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.371 06:32:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:10.371 06:32:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:10.371 06:32:02 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:10.371 06:32:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:10.371 06:32:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:10.371 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:10.371 06:32:02 -- nvmf/common.sh@469 -- # nvmfpid=82114 00:14:10.371 06:32:02 -- nvmf/common.sh@470 -- # waitforlisten 82114 00:14:10.371 06:32:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:10.371 06:32:02 -- common/autotest_common.sh@819 -- # '[' -z 82114 ']' 00:14:10.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.371 06:32:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.371 06:32:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:10.371 06:32:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.371 06:32:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:10.371 06:32:02 -- common/autotest_common.sh@10 -- # set +x 00:14:10.371 [2024-10-04 06:32:02.973087] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
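The ping checks above complete the test topology: an initiator veth pair left in the root namespace and a target pair moved into nvmf_tgt_ns_spdk, joined by the nvmf_br bridge. A condensed sketch built only from commands visible in this log (ordering simplified; the second target pair and the address-flush steps are omitted):

    # Sketch: rebuild the root-ns <-> target-ns veth/bridge topology by hand.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target side leaves root ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                 # both peers meet on the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns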
00:14:10.371 [2024-10-04 06:32:02.973167] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.630 [2024-10-04 06:32:03.110012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.630 [2024-10-04 06:32:03.180590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:10.630 [2024-10-04 06:32:03.180769] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.630 [2024-10-04 06:32:03.180787] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.630 [2024-10-04 06:32:03.180798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.630 [2024-10-04 06:32:03.180931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.630 [2024-10-04 06:32:03.181146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.565 06:32:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:11.565 06:32:03 -- common/autotest_common.sh@852 -- # return 0 00:14:11.565 06:32:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.565 06:32:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:11.565 06:32:03 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 06:32:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 [2024-10-04 06:32:04.051727] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 [2024-10-04 06:32:04.072035] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 NULL1 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 
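The rpc_cmd calls traced here (and just below; the Delay0 token that follows resumes the log with the delay bdev's creation result) stand up the whole target side: a TCP transport, a subsystem capped at 10 namespaces via -m 10, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev whose 1,000,000 us latencies keep I/O in flight long enough for the delete to catch it. The same sequence can be driven by hand with scripts/rpc.py; arguments are copied from the log, while the rpc.py path and default RPC socket are assumptions:

    # Sketch: the target-side setup this test performs over the RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0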
Delay0 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.565 06:32:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.565 06:32:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.565 06:32:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@28 -- # perf_pid=82165 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:11.565 06:32:04 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:11.823 [2024-10-04 06:32:04.263055] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:13.727 06:32:06 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.727 06:32:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.727 06:32:06 -- common/autotest_common.sh@10 -- # set +x 00:14:13.727 [... several hundred repetitive I/O completion records elided: interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines logged at 00:14:13.727 ...] [2024-10-04 06:32:06.305842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d610 is same with the state(5) to be set 00:14:13.727 [... further Read/Write completion and "starting I/O failed: -6" records elided ...] [2024-10-04 06:32:06.308566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfd8000c00 is same with the state(5) to be set 00:14:13.728 [... further Read/Write completion records elided ...] 00:14:14.663 [2024-10-04 06:32:07.277232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110040 is same with the state(5) to be set 00:14:14.663 [... further Read/Write completion records elided ...] [2024-10-04 06:32:07.308090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfd800bf20 is same with the state(5) to be set 00:14:14.663 [... further Read/Write completion records elided ...] [2024-10-04 06:32:07.308373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d360 is same with the state(5) to be set 00:14:14.664 [... further Read/Write completion records elided ...] [2024-10-04 06:32:07.308542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114d8c0 is same with the state(5) to be set 00:14:14.664 [... further Read/Write completion records elided ...] [2024-10-04 06:32:07.309140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbfd800c600 is same with the state(5) to be set 00:14:14.664 [2024-10-04 06:32:07.310293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1110040 (9): Bad file descriptor 00:14:14.664 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:14.664 06:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:14.664 06:32:07 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:14.664 06:32:07 -- target/delete_subsystem.sh@35 -- # kill -0 82165 00:14:14.664 06:32:07 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:14.664 Initializing NVMe Controllers 00:14:14.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.664 Controller IO queue size 128, less than required. 00:14:14.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:14.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:14.664 Initialization complete. Launching workers.
00:14:14.664 ========================================================
00:14:14.664                                            Latency(us)
00:14:14.664 Device Information                                                     :    IOPS   MiB/s     Average        min         max
00:14:14.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  170.79    0.08   894801.04     570.20  1018677.72
00:14:14.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  164.36    0.08   909490.70     413.10  1018827.03
00:14:14.664 ========================================================
00:14:14.664 Total                                                                  :  335.15    0.16   902004.83     413.10  1018827.03
00:14:14.664
00:14:15.231 06:32:07 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@35 -- # kill -0 82165 00:14:15.231 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82165) - No such process 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@45 -- # NOT wait 82165 00:14:15.231 06:32:07 -- common/autotest_common.sh@640 -- # local es=0 00:14:15.231 06:32:07 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 82165 00:14:15.231 06:32:07 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:15.231 06:32:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:15.231 06:32:07 -- common/autotest_common.sh@632 -- # type -t wait 00:14:15.231 06:32:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:15.231 06:32:07 -- common/autotest_common.sh@643 -- # wait 82165 00:14:15.231 06:32:07 -- common/autotest_common.sh@643 -- # es=1 00:14:15.231 06:32:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:15.231 06:32:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:15.231 06:32:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.231 06:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.231 06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:14:15.231 06:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.231 06:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.231 06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:14:15.231 [2024-10-04 06:32:07.834204] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.231 06:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.231 06:32:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.231 06:32:07 -- common/autotest_common.sh@10 -- # set +x 00:14:15.231 06:32:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@54 -- # perf_pid=82211 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:15.231 06:32:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:15.490 [2024-10-04 06:32:08.005386] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
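For reference, the delete_subsystem steps traced above reduce to roughly the following sketch (assuming, as the trace implies, that rpc_cmd wraps scripts/rpc.py against the running target and that spdk_nvme_perf is backgrounded so its PID can be polled):

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0        # expose the Delay0 bdev as a namespace
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &                          # 3 s of 512-byte randrw at queue depth 128
perf_pid=$!                                                            # 82211 in this run
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do                              # poll until perf exits on its own
    (( delay++ > 20 )) && exit 1                                       # fail the test if it outlives ~10 s of polling
    sleep 0.5
done

The poll loop that follows in the trace is exactly this kill -0 / sleep 0.5 pattern; kill -0 sends no signal, it only checks that the PID still exists.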
00:14:15.749 06:32:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:15.749 06:32:08 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:15.749 06:32:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:16.316 06:32:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:16.316 06:32:08 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:16.316 06:32:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:16.884 06:32:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:16.884 06:32:09 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:16.884 06:32:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.451 06:32:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:17.451 06:32:09 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:17.451 06:32:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:17.710 06:32:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:17.710 06:32:10 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:17.710 06:32:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:18.277 06:32:10 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.277 06:32:10 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:18.277 06:32:10 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:18.535 Initializing NVMe Controllers 00:14:18.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.535 Controller IO queue size 128, less than required. 00:14:18.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:18.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:18.535 Initialization complete. Launching workers.
00:14:18.535 ========================================================
00:14:18.535                                            Latency(us)
00:14:18.535 Device Information                                                     :    IOPS   MiB/s     Average         min          max
00:14:18.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06  1004070.86  1000202.16   1016563.89
00:14:18.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06  1006942.26  1000220.04   1042395.58
00:14:18.535 ========================================================
00:14:18.535 Total                                                                  :  256.00    0.12  1005506.56  1000202.16   1042395.58
00:14:18.535
00:14:18.794 06:32:11 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:18.794 06:32:11 -- target/delete_subsystem.sh@57 -- # kill -0 82211 00:14:18.794 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82211) - No such process 00:14:18.794 06:32:11 -- target/delete_subsystem.sh@67 -- # wait 82211 00:14:18.794 06:32:11 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:18.794 06:32:11 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:18.794 06:32:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:18.794 06:32:11 -- nvmf/common.sh@116 -- # sync 00:14:18.794 06:32:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:18.794 06:32:11 -- nvmf/common.sh@119 -- # set +e 00:14:18.794 06:32:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:18.794 06:32:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:18.794 rmmod nvme_tcp 00:14:18.794 rmmod nvme_fabrics 00:14:18.794 rmmod nvme_keyring 00:14:19.052 06:32:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:19.052 06:32:11 -- nvmf/common.sh@123 -- # set -e 00:14:19.052 06:32:11 -- nvmf/common.sh@124 -- # return 0 00:14:19.052 06:32:11 -- nvmf/common.sh@477 -- # '[' -n 82114 ']' 00:14:19.052 06:32:11 -- nvmf/common.sh@478 -- # killprocess 82114 00:14:19.052 06:32:11 -- common/autotest_common.sh@926 -- # '[' -z 82114 ']' 00:14:19.052 06:32:11 -- common/autotest_common.sh@930 -- # kill -0 82114 00:14:19.052 06:32:11 -- common/autotest_common.sh@931 -- # uname 00:14:19.052 06:32:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:19.052 06:32:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82114 00:14:19.052 06:32:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:19.052 06:32:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:19.052 06:32:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82114' killing process with pid 82114 00:14:19.052 06:32:11 -- common/autotest_common.sh@945 -- # kill 82114 00:14:19.052 06:32:11 -- common/autotest_common.sh@950 -- # wait 82114 00:14:19.052 06:32:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:19.052 06:32:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:19.052 06:32:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:19.052 06:32:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.052 06:32:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:19.052 06:32:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.052 06:32:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.052 06:32:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.311 06:32:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:19.311 00:14:19.311 real 0m9.283s 00:14:19.311 user 0m29.318s 00:14:19.311 sys 0m1.068s 00:14:19.311 06:32:11 --
common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.311 06:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:19.311 ************************************ 00:14:19.311 END TEST nvmf_delete_subsystem 00:14:19.311 ************************************ 00:14:19.311 06:32:11 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:19.311 06:32:11 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:19.311 06:32:11 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:19.311 06:32:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:19.311 06:32:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.311 06:32:11 -- common/autotest_common.sh@10 -- # set +x 00:14:19.311 ************************************ 00:14:19.311 START TEST nvmf_host_management 00:14:19.311 ************************************ 00:14:19.311 06:32:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:19.311 * Looking for test storage... 00:14:19.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.311 06:32:11 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.311 06:32:11 -- nvmf/common.sh@7 -- # uname -s 00:14:19.311 06:32:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.311 06:32:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.311 06:32:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.311 06:32:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.311 06:32:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.311 06:32:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.311 06:32:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.311 06:32:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.311 06:32:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.311 06:32:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.311 06:32:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:19.311 06:32:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:19.311 06:32:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.311 06:32:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.311 06:32:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.311 06:32:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.311 06:32:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.311 06:32:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.311 06:32:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.311 [trimmed: paths/export.sh@2 through @6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, export it, and echo the result; four near-identical multi-kilobyte PATH dumps omitted] 00:14:19.311 06:32:11 -- nvmf/common.sh@46 -- # : 0 00:14:19.311 06:32:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.311 06:32:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.311 06:32:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.311 06:32:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.311 06:32:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.311 06:32:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.311 06:32:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.311 06:32:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.311 06:32:11 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.311 06:32:11 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.311 06:32:11 -- target/host_management.sh@104 -- # nvmftestinit 00:14:19.311 06:32:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:19.311 06:32:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.311 06:32:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:19.311 06:32:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:19.311 06:32:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:19.311 06:32:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.311 06:32:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.311 06:32:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.311 06:32:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:19.311 06:32:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:19.311 06:32:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:19.311 06:32:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:19.311 06:32:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp
]] 00:14:19.311 06:32:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:19.312 06:32:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.312 06:32:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.312 06:32:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.312 06:32:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:19.312 06:32:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.312 06:32:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.312 06:32:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.312 06:32:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.312 06:32:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.312 06:32:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.312 06:32:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.312 06:32:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.312 06:32:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:19.312 06:32:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:19.312 Cannot find device "nvmf_tgt_br" 00:14:19.312 06:32:11 -- nvmf/common.sh@154 -- # true 00:14:19.312 06:32:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.312 Cannot find device "nvmf_tgt_br2" 00:14:19.312 06:32:11 -- nvmf/common.sh@155 -- # true 00:14:19.312 06:32:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:19.312 06:32:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:19.312 Cannot find device "nvmf_tgt_br" 00:14:19.312 06:32:11 -- nvmf/common.sh@157 -- # true 00:14:19.312 06:32:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:19.312 Cannot find device "nvmf_tgt_br2" 00:14:19.312 06:32:11 -- nvmf/common.sh@158 -- # true 00:14:19.312 06:32:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:19.570 06:32:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:19.570 06:32:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.570 06:32:12 -- nvmf/common.sh@161 -- # true 00:14:19.570 06:32:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.570 06:32:12 -- nvmf/common.sh@162 -- # true 00:14:19.570 06:32:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.570 06:32:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.570 06:32:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.570 06:32:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.570 06:32:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.570 06:32:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.570 06:32:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.570 06:32:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.570 06:32:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.570 
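Condensed from the nvmf_veth_init commands above and on the following lines, the virtual topology under test amounts to the sketch below (a reading of the trace, not the script itself; the failed cleanup of a previous run and the link-up steps are folded in):

ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # primary target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # secondary target address
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge stitching the *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The three pings that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) verify exactly this wiring before the target is started.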
06:32:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:19.570 06:32:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:19.570 06:32:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:19.570 06:32:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:19.570 06:32:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.570 06:32:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.570 06:32:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.570 06:32:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:19.570 06:32:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:19.570 06:32:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.570 06:32:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.570 06:32:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.570 06:32:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.570 06:32:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.570 06:32:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:19.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:19.570 00:14:19.570 --- 10.0.0.2 ping statistics --- 00:14:19.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.570 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:19.570 06:32:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:19.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:14:19.570 00:14:19.570 --- 10.0.0.3 ping statistics --- 00:14:19.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.570 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:19.570 06:32:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:19.570 00:14:19.570 --- 10.0.0.1 ping statistics --- 00:14:19.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.570 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:19.570 06:32:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.570 06:32:12 -- nvmf/common.sh@421 -- # return 0 00:14:19.570 06:32:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.570 06:32:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.570 06:32:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.570 06:32:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.570 06:32:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.570 06:32:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.570 06:32:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.570 06:32:12 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:19.570 06:32:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:19.570 06:32:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.570 06:32:12 -- common/autotest_common.sh@10 -- # set +x 00:14:19.570 ************************************ 00:14:19.570 START TEST nvmf_host_management 00:14:19.570 ************************************ 00:14:19.570 06:32:12 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:19.570 06:32:12 -- target/host_management.sh@69 -- # starttarget 00:14:19.570 06:32:12 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:19.570 06:32:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.570 06:32:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.571 06:32:12 -- common/autotest_common.sh@10 -- # set +x 00:14:19.571 06:32:12 -- nvmf/common.sh@469 -- # nvmfpid=82445 00:14:19.571 06:32:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:19.571 06:32:12 -- nvmf/common.sh@470 -- # waitforlisten 82445 00:14:19.571 06:32:12 -- common/autotest_common.sh@819 -- # '[' -z 82445 ']' 00:14:19.571 06:32:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.571 06:32:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.571 06:32:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.571 06:32:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.571 06:32:12 -- common/autotest_common.sh@10 -- # set +x 00:14:19.841 [2024-10-04 06:32:12.286123] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:19.841 [2024-10-04 06:32:12.286225] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.841 [2024-10-04 06:32:12.426885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.131 [2024-10-04 06:32:12.518323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.131 [2024-10-04 06:32:12.518509] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:20.131 [2024-10-04 06:32:12.518525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.131 [2024-10-04 06:32:12.518536] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.131 [2024-10-04 06:32:12.519123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.131 [2024-10-04 06:32:12.519353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.131 [2024-10-04 06:32:12.519428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:20.131 [2024-10-04 06:32:12.519440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.698 06:32:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.698 06:32:13 -- common/autotest_common.sh@852 -- # return 0 00:14:20.698 06:32:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.698 06:32:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.698 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 06:32:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.698 06:32:13 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.698 06:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.698 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 [2024-10-04 06:32:13.297578] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.698 06:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.698 06:32:13 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:20.698 06:32:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:20.698 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 06:32:13 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:20.698 06:32:13 -- target/host_management.sh@23 -- # cat 00:14:20.698 06:32:13 -- target/host_management.sh@30 -- # rpc_cmd 00:14:20.698 06:32:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.698 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.698 Malloc0 00:14:20.957 [2024-10-04 06:32:13.381190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.957 06:32:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.957 06:32:13 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:20.957 06:32:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.957 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.957 06:32:13 -- target/host_management.sh@73 -- # perfpid=82523 00:14:20.957 06:32:13 -- target/host_management.sh@74 -- # waitforlisten 82523 /var/tmp/bdevperf.sock 00:14:20.957 06:32:13 -- common/autotest_common.sh@819 -- # '[' -z 82523 ']' 00:14:20.957 06:32:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.957 06:32:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:20.957 06:32:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:20.957 06:32:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:20.957 06:32:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.957 06:32:13 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:20.957 06:32:13 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:20.957 06:32:13 -- nvmf/common.sh@520 -- # config=() 00:14:20.957 06:32:13 -- nvmf/common.sh@520 -- # local subsystem config 00:14:20.957 06:32:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:20.957 06:32:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:20.957 { 00:14:20.957 "params": { 00:14:20.957 "name": "Nvme$subsystem", 00:14:20.957 "trtype": "$TEST_TRANSPORT", 00:14:20.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.957 "adrfam": "ipv4", 00:14:20.957 "trsvcid": "$NVMF_PORT", 00:14:20.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.957 "hdgst": ${hdgst:-false}, 00:14:20.957 "ddgst": ${ddgst:-false} 00:14:20.957 }, 00:14:20.957 "method": "bdev_nvme_attach_controller" 00:14:20.957 } 00:14:20.957 EOF 00:14:20.957 )") 00:14:20.957 06:32:13 -- nvmf/common.sh@542 -- # cat 00:14:20.957 06:32:13 -- nvmf/common.sh@544 -- # jq . 00:14:20.957 06:32:13 -- nvmf/common.sh@545 -- # IFS=, 00:14:20.957 06:32:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:20.957 "params": { 00:14:20.957 "name": "Nvme0", 00:14:20.957 "trtype": "tcp", 00:14:20.957 "traddr": "10.0.0.2", 00:14:20.957 "adrfam": "ipv4", 00:14:20.957 "trsvcid": "4420", 00:14:20.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:20.957 "hdgst": false, 00:14:20.957 "ddgst": false 00:14:20.957 }, 00:14:20.957 "method": "bdev_nvme_attach_controller" 00:14:20.957 }' 00:14:20.957 [2024-10-04 06:32:13.487509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:20.957 [2024-10-04 06:32:13.487870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82523 ] 00:14:20.957 [2024-10-04 06:32:13.626400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.216 [2024-10-04 06:32:13.704720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.216 Running I/O for 10 seconds... 
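The gen_nvmf_target_json helper above fills its heredoc once per subsystem and hands the result to bdevperf over bash process substitution, which is why the trace shows --json /dev/fd/63. A hand-run equivalent, assuming the printed config has been saved to a file (the filename is hypothetical):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \      # bdevperf's own RPC socket, separate from the target's /var/tmp/spdk.sock
    --json ./nvme0_target.json \     # gen_nvmf_target_json output saved verbatim (hypothetical filename)
    -q 64 -o 65536 -w verify -t 10   # queue depth 64, 64 KiB I/O, verify workload, 10 seconds

With that config, bdevperf attaches controller Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host0, creating the Nvme0n1 bdev that the iostat polling below waits on.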
00:14:22.152 06:32:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.152 06:32:14 -- common/autotest_common.sh@852 -- # return 0 00:14:22.152 06:32:14 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:22.152 06:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.152 06:32:14 -- common/autotest_common.sh@10 -- # set +x 00:14:22.152 06:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.152 06:32:14 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.152 06:32:14 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:22.152 06:32:14 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:22.152 06:32:14 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:22.152 06:32:14 -- target/host_management.sh@52 -- # local ret=1 00:14:22.152 06:32:14 -- target/host_management.sh@53 -- # local i 00:14:22.152 06:32:14 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:22.152 06:32:14 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:22.152 06:32:14 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:22.152 06:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.152 06:32:14 -- common/autotest_common.sh@10 -- # set +x 00:14:22.152 06:32:14 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:22.152 06:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.152 06:32:14 -- target/host_management.sh@55 -- # read_io_count=2154 00:14:22.152 06:32:14 -- target/host_management.sh@58 -- # '[' 2154 -ge 100 ']' 00:14:22.152 06:32:14 -- target/host_management.sh@59 -- # ret=0 00:14:22.152 06:32:14 -- target/host_management.sh@60 -- # break 00:14:22.153 06:32:14 -- target/host_management.sh@64 -- # return 0 00:14:22.153 06:32:14 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:22.153 06:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.153 06:32:14 -- common/autotest_common.sh@10 -- # set +x 00:14:22.153 [2024-10-04 06:32:14.539111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the 
state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.539570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97530 is same with the state(5) to be set 00:14:22.153 [2024-10-04 06:32:14.540283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.153 [2024-10-04 06:32:14.540525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.153 [2024-10-04 06:32:14.540535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.154 [2024-10-04 06:32:14.540790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.154 [2024-10-04 06:32:14.540800 - 06:32:14.541678] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 39 further outstanding READ/WRITE commands on sqid:1 (nsid:1, lba range 38400-48000, len:128 each) completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [identical command/completion pairs condensed] 00:14:22.155 [2024-10-04 06:32:14.541759] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12207c0 was disconnected and freed. reset controller.
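Note on reading the burst above: in spdk_nvme_print_completion output, "(00/08)" encodes Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion) — every I/O still queued on qid:1 when the reset tore down the submission queue completes this way, so the flood is the expected side effect of the controller reset, not a new failure. A minimal bash sketch for summarizing such a burst offline, assuming the console output was saved verbatim to a file named build.log (hypothetical name):

# Tally aborted commands per opcode and report the LBA span they covered.
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: \(READ\|WRITE\) sqid:1 [^ ]* [^ ]* lba:[0-9]*' build.log |
awk '{
    op = $3; split($7, a, ":"); lba = a[2] + 0
    n[op]++
    if (NR == 1) { min = lba; max = lba }
    if (lba < min) min = lba
    if (lba > max) max = lba
}
END {
    for (o in n) printf "%s: %d aborted\n", o, n[o]
    printf "LBA span: %d-%d\n", min, max
}'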
00:14:22.155 [2024-10-04 06:32:14.542967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:22.155 06:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.155 06:32:14 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:22.155 06:32:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.155 06:32:14 -- common/autotest_common.sh@10 -- # set +x 00:14:22.155 task offset: 42496 on job bdev=Nvme0n1 fails 00:14:22.155 00:14:22.155 Latency(us) 00:14:22.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.155 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:22.155 Job: Nvme0n1 ended in about 0.66 seconds with error 00:14:22.155 Verification LBA range: start 0x0 length 0x400 00:14:22.155 Nvme0n1 : 0.66 3592.40 224.53 97.42 0.00 17056.52 1966.08 23592.96 00:14:22.155 =================================================================================================================== 00:14:22.155 Total : 3592.40 224.53 97.42 0.00 17056.52 1966.08 23592.96 00:14:22.155 [2024-10-04 06:32:14.545028] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:22.155 [2024-10-04 06:32:14.545057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12872e0 (9): Bad file descriptor 00:14:22.155 06:32:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.155 06:32:14 -- target/host_management.sh@87 -- # sleep 1 00:14:22.155 [2024-10-04 06:32:14.554137] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:23.093 06:32:15 -- target/host_management.sh@91 -- # kill -9 82523 00:14:23.093 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82523) - No such process 00:14:23.093 06:32:15 -- target/host_management.sh@91 -- # true 00:14:23.093 06:32:15 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:23.093 06:32:15 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:23.093 06:32:15 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:23.093 06:32:15 -- nvmf/common.sh@520 -- # config=() 00:14:23.093 06:32:15 -- nvmf/common.sh@520 -- # local subsystem config 00:14:23.093 06:32:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:23.093 06:32:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:23.093 { 00:14:23.093 "params": { 00:14:23.093 "name": "Nvme$subsystem", 00:14:23.093 "trtype": "$TEST_TRANSPORT", 00:14:23.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:23.093 "adrfam": "ipv4", 00:14:23.093 "trsvcid": "$NVMF_PORT", 00:14:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:23.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:23.093 "hdgst": ${hdgst:-false}, 00:14:23.093 "ddgst": ${ddgst:-false} 00:14:23.093 }, 00:14:23.093 "method": "bdev_nvme_attach_controller" 00:14:23.093 } 00:14:23.093 EOF 00:14:23.093 )") 00:14:23.093 06:32:15 -- nvmf/common.sh@542 -- # cat 00:14:23.093 06:32:15 -- nvmf/common.sh@544 -- # jq . 
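For orientation: the gen_nvmf_target_json trace above assembles, via a here-doc piped through jq, the JSON config that bdevperf reads from fd 62 (--json /dev/fd/62); the fully expanded attach call it produced is printed just below. A standalone equivalent — a sketch only, assuming the usual "subsystems"/"bdev" wrapper around the attach entry and reusing the values from this run — would be:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 1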
00:14:23.093 06:32:15 -- nvmf/common.sh@545 -- # IFS=, 00:14:23.093 06:32:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:23.093 "params": { 00:14:23.093 "name": "Nvme0", 00:14:23.093 "trtype": "tcp", 00:14:23.093 "traddr": "10.0.0.2", 00:14:23.093 "adrfam": "ipv4", 00:14:23.093 "trsvcid": "4420", 00:14:23.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:23.093 "hdgst": false, 00:14:23.093 "ddgst": false 00:14:23.093 }, 00:14:23.094 "method": "bdev_nvme_attach_controller" 00:14:23.094 }' 00:14:23.094 [2024-10-04 06:32:15.618787] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:23.094 [2024-10-04 06:32:15.618927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82573 ] 00:14:23.094 [2024-10-04 06:32:15.754943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.353 [2024-10-04 06:32:15.817550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.353 Running I/O for 1 seconds... 00:14:24.729 00:14:24.729 Latency(us) 00:14:24.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.729 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:24.729 Verification LBA range: start 0x0 length 0x400 00:14:24.729 Nvme0n1 : 1.01 3738.73 233.67 0.00 0.00 16822.95 1131.99 22878.02 00:14:24.729 =================================================================================================================== 00:14:24.729 Total : 3738.73 233.67 0.00 0.00 16822.95 1131.99 22878.02 00:14:24.729 06:32:17 -- target/host_management.sh@101 -- # stoptarget 00:14:24.729 06:32:17 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:24.729 06:32:17 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:24.729 06:32:17 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:24.729 06:32:17 -- target/host_management.sh@40 -- # nvmftestfini 00:14:24.729 06:32:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:24.729 06:32:17 -- nvmf/common.sh@116 -- # sync 00:14:24.729 06:32:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:24.729 06:32:17 -- nvmf/common.sh@119 -- # set +e 00:14:24.729 06:32:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:24.729 06:32:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:24.729 rmmod nvme_tcp 00:14:24.729 rmmod nvme_fabrics 00:14:24.729 rmmod nvme_keyring 00:14:24.729 06:32:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:24.729 06:32:17 -- nvmf/common.sh@123 -- # set -e 00:14:24.729 06:32:17 -- nvmf/common.sh@124 -- # return 0 00:14:24.729 06:32:17 -- nvmf/common.sh@477 -- # '[' -n 82445 ']' 00:14:24.729 06:32:17 -- nvmf/common.sh@478 -- # killprocess 82445 00:14:24.729 06:32:17 -- common/autotest_common.sh@926 -- # '[' -z 82445 ']' 00:14:24.729 06:32:17 -- common/autotest_common.sh@930 -- # kill -0 82445 00:14:24.729 06:32:17 -- common/autotest_common.sh@931 -- # uname 00:14:24.729 06:32:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:24.729 06:32:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82445 00:14:24.729 06:32:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:24.729 06:32:17 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:24.729 killing process with pid 82445 00:14:24.730 06:32:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82445' 00:14:24.730 06:32:17 -- common/autotest_common.sh@945 -- # kill 82445 00:14:24.730 06:32:17 -- common/autotest_common.sh@950 -- # wait 82445 00:14:24.988 [2024-10-04 06:32:17.617802] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:24.988 06:32:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:24.988 06:32:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:24.988 06:32:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:24.988 06:32:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.988 06:32:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:24.988 06:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.988 06:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.988 06:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.246 06:32:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:25.246 00:14:25.246 real 0m5.455s 00:14:25.246 user 0m22.650s 00:14:25.246 sys 0m1.355s 00:14:25.246 06:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.246 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:14:25.246 ************************************ 00:14:25.246 END TEST nvmf_host_management 00:14:25.246 ************************************ 00:14:25.246 06:32:17 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:25.246 00:14:25.246 real 0m5.914s 00:14:25.246 user 0m22.758s 00:14:25.246 sys 0m1.590s 00:14:25.246 06:32:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.246 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:14:25.246 ************************************ 00:14:25.246 END TEST nvmf_host_management 00:14:25.246 ************************************ 00:14:25.246 06:32:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:25.246 06:32:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:25.246 06:32:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.246 06:32:17 -- common/autotest_common.sh@10 -- # set +x 00:14:25.246 ************************************ 00:14:25.246 START TEST nvmf_lvol 00:14:25.246 ************************************ 00:14:25.246 06:32:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:25.246 * Looking for test storage... 
00:14:25.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:25.246 06:32:17 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.246 06:32:17 -- nvmf/common.sh@7 -- # uname -s 00:14:25.246 06:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.246 06:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.246 06:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.246 06:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.246 06:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.246 06:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.246 06:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.246 06:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.246 06:32:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.247 06:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.247 06:32:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:25.247 06:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:25.247 06:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.247 06:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.247 06:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.247 06:32:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.247 06:32:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.247 06:32:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.247 06:32:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.247 06:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.247 06:32:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.247 06:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.247 06:32:17 -- 
paths/export.sh@5 -- # export PATH 00:14:25.247 06:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.247 06:32:17 -- nvmf/common.sh@46 -- # : 0 00:14:25.247 06:32:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:25.247 06:32:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:25.247 06:32:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:25.247 06:32:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.247 06:32:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.247 06:32:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:25.247 06:32:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:25.247 06:32:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.247 06:32:17 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:25.247 06:32:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:25.247 06:32:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.247 06:32:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:25.247 06:32:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:25.247 06:32:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:25.247 06:32:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.247 06:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.247 06:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.247 06:32:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:25.247 06:32:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:25.247 06:32:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:25.247 06:32:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:25.247 06:32:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:25.247 06:32:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:25.247 06:32:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.247 06:32:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.247 06:32:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:25.247 06:32:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:25.247 06:32:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:25.247 06:32:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:25.247 06:32:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:25.247 06:32:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.247 06:32:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:25.247 06:32:17 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:25.247 06:32:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:25.247 06:32:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:25.247 06:32:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:25.247 06:32:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:25.247 Cannot find device "nvmf_tgt_br" 00:14:25.247 06:32:17 -- nvmf/common.sh@154 -- # true 00:14:25.247 06:32:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.247 Cannot find device "nvmf_tgt_br2" 00:14:25.247 06:32:17 -- nvmf/common.sh@155 -- # true 00:14:25.247 06:32:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:25.247 06:32:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:25.505 Cannot find device "nvmf_tgt_br" 00:14:25.505 06:32:17 -- nvmf/common.sh@157 -- # true 00:14:25.505 06:32:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:25.505 Cannot find device "nvmf_tgt_br2" 00:14:25.505 06:32:17 -- nvmf/common.sh@158 -- # true 00:14:25.505 06:32:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:25.505 06:32:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:25.505 06:32:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.505 06:32:18 -- nvmf/common.sh@161 -- # true 00:14:25.505 06:32:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.505 06:32:18 -- nvmf/common.sh@162 -- # true 00:14:25.505 06:32:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:25.505 06:32:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:25.505 06:32:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:25.505 06:32:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:25.505 06:32:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:25.505 06:32:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:25.505 06:32:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:25.505 06:32:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:25.505 06:32:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:25.505 06:32:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:25.505 06:32:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:25.505 06:32:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:25.505 06:32:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:25.505 06:32:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:25.505 06:32:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:25.505 06:32:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:25.505 06:32:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:25.505 06:32:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:25.505 06:32:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:25.505 06:32:18 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:25.505 06:32:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:25.505 06:32:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:25.505 06:32:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:25.505 06:32:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:25.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:25.505 00:14:25.505 --- 10.0.0.2 ping statistics --- 00:14:25.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.505 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:25.505 06:32:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:25.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:25.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:25.505 00:14:25.505 --- 10.0.0.3 ping statistics --- 00:14:25.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.505 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:25.505 06:32:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:25.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:25.505 00:14:25.505 --- 10.0.0.1 ping statistics --- 00:14:25.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.505 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:25.505 06:32:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.505 06:32:18 -- nvmf/common.sh@421 -- # return 0 00:14:25.505 06:32:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:25.505 06:32:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.505 06:32:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:25.505 06:32:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:25.505 06:32:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.506 06:32:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:25.506 06:32:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:25.764 06:32:18 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:25.764 06:32:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:25.764 06:32:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:25.764 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.764 06:32:18 -- nvmf/common.sh@469 -- # nvmfpid=82804 00:14:25.764 06:32:18 -- nvmf/common.sh@470 -- # waitforlisten 82804 00:14:25.764 06:32:18 -- common/autotest_common.sh@819 -- # '[' -z 82804 ']' 00:14:25.764 06:32:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:25.764 06:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.764 06:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:25.764 06:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:25.764 06:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:25.764 06:32:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.764 [2024-10-04 06:32:18.253619] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:25.764 [2024-10-04 06:32:18.253715] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.764 [2024-10-04 06:32:18.388082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:26.022 [2024-10-04 06:32:18.455705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:26.022 [2024-10-04 06:32:18.455884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.023 [2024-10-04 06:32:18.455898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.023 [2024-10-04 06:32:18.455906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.023 [2024-10-04 06:32:18.456089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.023 [2024-10-04 06:32:18.456247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.023 [2024-10-04 06:32:18.456256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.589 06:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:26.589 06:32:19 -- common/autotest_common.sh@852 -- # return 0 00:14:26.589 06:32:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:26.589 06:32:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:26.589 06:32:19 -- common/autotest_common.sh@10 -- # set +x 00:14:26.847 06:32:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.847 06:32:19 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.847 [2024-10-04 06:32:19.476963] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.847 06:32:19 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.105 06:32:19 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:27.105 06:32:19 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.363 06:32:20 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:27.363 06:32:20 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:27.621 06:32:20 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:27.879 06:32:20 -- target/nvmf_lvol.sh@29 -- # lvs=10abb02b-d2c2-4b21-b555-b7de5d7f757f 00:14:27.879 06:32:20 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 10abb02b-d2c2-4b21-b555-b7de5d7f757f lvol 20 00:14:28.136 06:32:20 -- target/nvmf_lvol.sh@32 -- # lvol=5459a70c-d488-4fd7-9f31-687b321b5514 00:14:28.136 06:32:20 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.394 06:32:21 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 5459a70c-d488-4fd7-9f31-687b321b5514 00:14:28.652 06:32:21 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:28.909 [2024-10-04 06:32:21.532476] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.909 06:32:21 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.167 06:32:21 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:29.167 06:32:21 -- target/nvmf_lvol.sh@42 -- # perf_pid=82953 00:14:29.167 06:32:21 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:30.102 06:32:22 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5459a70c-d488-4fd7-9f31-687b321b5514 MY_SNAPSHOT 00:14:30.668 06:32:23 -- target/nvmf_lvol.sh@47 -- # snapshot=437f06ad-4ead-4400-934e-3b182b483da0 00:14:30.668 06:32:23 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5459a70c-d488-4fd7-9f31-687b321b5514 30 00:14:30.929 06:32:23 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 437f06ad-4ead-4400-934e-3b182b483da0 MY_CLONE 00:14:31.189 06:32:23 -- target/nvmf_lvol.sh@49 -- # clone=3db0302d-d7f3-42d8-bf6b-77b8b1fff066 00:14:31.189 06:32:23 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 3db0302d-d7f3-42d8-bf6b-77b8b1fff066 00:14:32.123 06:32:24 -- target/nvmf_lvol.sh@53 -- # wait 82953 00:14:40.266 Initializing NVMe Controllers 00:14:40.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:40.266 Controller IO queue size 128, less than required. 00:14:40.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:40.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:40.266 Initialization complete. Launching workers. 
00:14:40.266 ======================================================== 00:14:40.266 Latency(us) 00:14:40.266 Device Information : IOPS MiB/s Average min max 00:14:40.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7853.50 30.68 16300.30 677.18 85138.91 00:14:40.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6909.30 26.99 18531.39 3360.53 92205.36 00:14:40.266 ======================================================== 00:14:40.266 Total : 14762.80 57.67 17344.50 677.18 92205.36 00:14:40.266 00:14:40.266 06:32:32 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:40.266 06:32:32 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5459a70c-d488-4fd7-9f31-687b321b5514 00:14:40.266 06:32:32 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10abb02b-d2c2-4b21-b555-b7de5d7f757f 00:14:40.525 06:32:32 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:40.525 06:32:32 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:40.525 06:32:32 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:40.525 06:32:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:40.525 06:32:32 -- nvmf/common.sh@116 -- # sync 00:14:40.525 06:32:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:40.525 06:32:33 -- nvmf/common.sh@119 -- # set +e 00:14:40.525 06:32:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:40.525 06:32:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:40.525 rmmod nvme_tcp 00:14:40.525 rmmod nvme_fabrics 00:14:40.525 rmmod nvme_keyring 00:14:40.526 06:32:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:40.526 06:32:33 -- nvmf/common.sh@123 -- # set -e 00:14:40.526 06:32:33 -- nvmf/common.sh@124 -- # return 0 00:14:40.526 06:32:33 -- nvmf/common.sh@477 -- # '[' -n 82804 ']' 00:14:40.526 06:32:33 -- nvmf/common.sh@478 -- # killprocess 82804 00:14:40.526 06:32:33 -- common/autotest_common.sh@926 -- # '[' -z 82804 ']' 00:14:40.526 06:32:33 -- common/autotest_common.sh@930 -- # kill -0 82804 00:14:40.526 06:32:33 -- common/autotest_common.sh@931 -- # uname 00:14:40.526 06:32:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.526 06:32:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82804 00:14:40.526 killing process with pid 82804 00:14:40.526 06:32:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.526 06:32:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.526 06:32:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82804' 00:14:40.526 06:32:33 -- common/autotest_common.sh@945 -- # kill 82804 00:14:40.526 06:32:33 -- common/autotest_common.sh@950 -- # wait 82804 00:14:40.784 06:32:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:40.784 06:32:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:40.784 06:32:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:40.784 06:32:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.784 06:32:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:40.785 06:32:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.785 06:32:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.785 06:32:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.785 06:32:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
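Condensed, the snapshot/clone lifecycle the lvol test drove above comes down to four rpc.py calls. A sketch, not the test script itself: the UUID is the one from this run and changes on every invocation, and the sizes follow the test's LVOL_BDEV_INIT_SIZE=20 / LVOL_BDEV_FINAL_SIZE=30 (MiB):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvol=5459a70c-d488-4fd7-9f31-687b321b5514              # from bdev_lvol_create ... lvol 20

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # read-only snapshot; prints its name
$rpc bdev_lvol_resize "$lvol" 30                       # grow the live lvol from 20 to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)         # thin, writable clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                        # allocate all clusters, detach from snapshot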
00:14:40.785 00:14:40.785 real 0m15.652s 00:14:40.785 user 1m5.852s 00:14:40.785 sys 0m3.732s 00:14:40.785 06:32:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.785 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:14:40.785 ************************************ 00:14:40.785 END TEST nvmf_lvol 00:14:40.785 ************************************ 00:14:41.046 06:32:33 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:41.046 06:32:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:41.046 06:32:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.046 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:14:41.046 ************************************ 00:14:41.046 START TEST nvmf_lvs_grow 00:14:41.046 ************************************ 00:14:41.046 06:32:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:41.046 * Looking for test storage... 00:14:41.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.046 06:32:33 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.046 06:32:33 -- nvmf/common.sh@7 -- # uname -s 00:14:41.046 06:32:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.046 06:32:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.046 06:32:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.046 06:32:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.046 06:32:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.046 06:32:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.046 06:32:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.046 06:32:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.046 06:32:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.046 06:32:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.046 06:32:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:41.046 06:32:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:14:41.046 06:32:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.046 06:32:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.046 06:32:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.046 06:32:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.046 06:32:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.046 06:32:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.046 06:32:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.046 06:32:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.046 06:32:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.046 06:32:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.046 06:32:33 -- paths/export.sh@5 -- # export PATH 00:14:41.046 06:32:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.046 06:32:33 -- nvmf/common.sh@46 -- # : 0 00:14:41.046 06:32:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:41.046 06:32:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:41.046 06:32:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:41.046 06:32:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.047 06:32:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.047 06:32:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:41.047 06:32:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:41.047 06:32:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:41.047 06:32:33 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.047 06:32:33 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.047 06:32:33 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:41.047 06:32:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:41.047 06:32:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.047 06:32:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:41.047 06:32:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:41.047 06:32:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:41.047 06:32:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.047 06:32:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.047 06:32:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.047 06:32:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:41.047 06:32:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:41.047 06:32:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:41.047 06:32:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:41.047 06:32:33 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:41.047 06:32:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:41.047 06:32:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.047 06:32:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.047 06:32:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:41.047 06:32:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:41.047 06:32:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.047 06:32:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.047 06:32:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.047 06:32:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.047 06:32:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.047 06:32:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.047 06:32:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.047 06:32:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.047 06:32:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:41.047 06:32:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:41.047 Cannot find device "nvmf_tgt_br" 00:14:41.047 06:32:33 -- nvmf/common.sh@154 -- # true 00:14:41.047 06:32:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.047 Cannot find device "nvmf_tgt_br2" 00:14:41.047 06:32:33 -- nvmf/common.sh@155 -- # true 00:14:41.047 06:32:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:41.047 06:32:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:41.047 Cannot find device "nvmf_tgt_br" 00:14:41.047 06:32:33 -- nvmf/common.sh@157 -- # true 00:14:41.047 06:32:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:41.047 Cannot find device "nvmf_tgt_br2" 00:14:41.047 06:32:33 -- nvmf/common.sh@158 -- # true 00:14:41.047 06:32:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:41.047 06:32:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:41.047 06:32:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.306 06:32:33 -- nvmf/common.sh@161 -- # true 00:14:41.306 06:32:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.306 06:32:33 -- nvmf/common.sh@162 -- # true 00:14:41.306 06:32:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.306 06:32:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.306 06:32:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.306 06:32:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.306 06:32:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.306 06:32:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.306 06:32:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.306 06:32:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:41.306 06:32:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:14:41.306 06:32:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:41.306 06:32:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:41.306 06:32:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:41.306 06:32:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:41.306 06:32:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.306 06:32:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.306 06:32:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.306 06:32:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:41.306 06:32:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:41.306 06:32:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.306 06:32:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.306 06:32:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.306 06:32:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.306 06:32:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.306 06:32:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:41.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:41.306 00:14:41.306 --- 10.0.0.2 ping statistics --- 00:14:41.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.306 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:41.306 06:32:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:41.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:14:41.306 00:14:41.306 --- 10.0.0.3 ping statistics --- 00:14:41.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.306 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:41.306 06:32:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:41.306 00:14:41.306 --- 10.0.0.1 ping statistics --- 00:14:41.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.306 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:41.306 06:32:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.306 06:32:33 -- nvmf/common.sh@421 -- # return 0 00:14:41.306 06:32:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:41.306 06:32:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.306 06:32:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:41.306 06:32:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:41.306 06:32:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.306 06:32:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:41.306 06:32:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:41.306 06:32:33 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:41.306 06:32:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:41.306 06:32:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:41.306 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:14:41.306 06:32:33 -- nvmf/common.sh@469 -- # nvmfpid=83314 00:14:41.306 06:32:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:41.306 06:32:33 -- nvmf/common.sh@470 -- # waitforlisten 83314 00:14:41.306 06:32:33 -- common/autotest_common.sh@819 -- # '[' -z 83314 ']' 00:14:41.306 06:32:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.306 06:32:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.306 06:32:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.306 06:32:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.306 06:32:33 -- common/autotest_common.sh@10 -- # set +x 00:14:41.566 [2024-10-04 06:32:34.034525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:14:41.566 [2024-10-04 06:32:34.034611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.566 [2024-10-04 06:32:34.172467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.824 [2024-10-04 06:32:34.245775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.824 [2024-10-04 06:32:34.245968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.824 [2024-10-04 06:32:34.245982] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.824 [2024-10-04 06:32:34.245991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.824 [2024-10-04 06:32:34.246031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.392 06:32:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.392 06:32:35 -- common/autotest_common.sh@852 -- # return 0 00:14:42.392 06:32:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:42.392 06:32:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:42.392 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.651 06:32:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.651 06:32:35 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:42.911 [2024-10-04 06:32:35.346651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:42.911 06:32:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:42.911 06:32:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.911 06:32:35 -- common/autotest_common.sh@10 -- # set +x 00:14:42.911 ************************************ 00:14:42.911 START TEST lvs_grow_clean 00:14:42.911 ************************************ 00:14:42.911 06:32:35 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:42.911 06:32:35 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.170 06:32:35 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:43.170 06:32:35 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:43.428 06:32:35 -- target/nvmf_lvs_grow.sh@28 -- # lvs=18cb3912-069c-4e30-a773-7997ab825d25 00:14:43.428 06:32:35 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:43.428 06:32:35 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:43.687 06:32:36 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:43.687 06:32:36 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:43.687 06:32:36 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 18cb3912-069c-4e30-a773-7997ab825d25 lvol 150 00:14:43.946 06:32:36 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a77688d-ada9-4054-9a99-c0e8a711fa56 00:14:43.946 06:32:36 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:43.946 06:32:36 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:44.205 [2024-10-04 06:32:36.786721] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:44.205 [2024-10-04 06:32:36.786817] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:44.205 true 00:14:44.205 06:32:36 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:44.205 06:32:36 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:44.464 06:32:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:44.464 06:32:37 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.722 06:32:37 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a77688d-ada9-4054-9a99-c0e8a711fa56 00:14:44.981 06:32:37 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.240 [2024-10-04 06:32:37.807439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.240 06:32:37 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.499 06:32:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83477 00:14:45.499 06:32:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.499 06:32:38 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:45.499 06:32:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83477 /var/tmp/bdevperf.sock 00:14:45.499 06:32:38 -- common/autotest_common.sh@819 -- # '[' -z 83477 ']' 00:14:45.499 06:32:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.499 06:32:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.499 06:32:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.499 06:32:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.499 06:32:38 -- common/autotest_common.sh@10 -- # set +x 00:14:45.499 [2024-10-04 06:32:38.141376] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:14:45.499 [2024-10-04 06:32:38.141480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83477 ] 00:14:45.758 [2024-10-04 06:32:38.274844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.758 [2024-10-04 06:32:38.349685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.694 06:32:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.694 06:32:39 -- common/autotest_common.sh@852 -- # return 0 00:14:46.694 06:32:39 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:46.694 Nvme0n1 00:14:46.953 06:32:39 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:46.953 [ 00:14:46.953 { 00:14:46.953 "aliases": [ 00:14:46.953 "1a77688d-ada9-4054-9a99-c0e8a711fa56" 00:14:46.953 ], 00:14:46.953 "assigned_rate_limits": { 00:14:46.953 "r_mbytes_per_sec": 0, 00:14:46.953 "rw_ios_per_sec": 0, 00:14:46.953 "rw_mbytes_per_sec": 0, 00:14:46.953 "w_mbytes_per_sec": 0 00:14:46.953 }, 00:14:46.953 "block_size": 4096, 00:14:46.953 "claimed": false, 00:14:46.953 "driver_specific": { 00:14:46.953 "mp_policy": "active_passive", 00:14:46.953 "nvme": [ 00:14:46.953 { 00:14:46.953 "ctrlr_data": { 00:14:46.953 "ana_reporting": false, 00:14:46.953 "cntlid": 1, 00:14:46.953 "firmware_revision": "24.01.1", 00:14:46.953 "model_number": "SPDK bdev Controller", 00:14:46.953 "multi_ctrlr": true, 00:14:46.953 "oacs": { 00:14:46.953 "firmware": 0, 00:14:46.953 "format": 0, 00:14:46.953 "ns_manage": 0, 00:14:46.953 "security": 0 00:14:46.953 }, 00:14:46.953 "serial_number": "SPDK0", 00:14:46.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.953 "vendor_id": "0x8086" 00:14:46.953 }, 00:14:46.953 "ns_data": { 00:14:46.953 "can_share": true, 00:14:46.953 "id": 1 00:14:46.953 }, 00:14:46.953 "trid": { 00:14:46.953 "adrfam": "IPv4", 00:14:46.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:46.953 "traddr": "10.0.0.2", 00:14:46.953 "trsvcid": "4420", 00:14:46.953 "trtype": "TCP" 00:14:46.953 }, 00:14:46.953 "vs": { 00:14:46.953 "nvme_version": "1.3" 00:14:46.953 } 00:14:46.953 } 00:14:46.953 ] 00:14:46.953 }, 00:14:46.953 "name": "Nvme0n1", 00:14:46.953 "num_blocks": 38912, 00:14:46.953 "product_name": "NVMe disk", 00:14:46.953 "supported_io_types": { 00:14:46.953 "abort": true, 00:14:46.953 "compare": true, 00:14:46.953 "compare_and_write": true, 00:14:46.953 "flush": true, 00:14:46.953 "nvme_admin": true, 00:14:46.953 "nvme_io": true, 00:14:46.953 "read": true, 00:14:46.953 "reset": true, 00:14:46.953 "unmap": true, 00:14:46.953 "write": true, 00:14:46.953 "write_zeroes": true 00:14:46.953 }, 00:14:46.953 "uuid": "1a77688d-ada9-4054-9a99-c0e8a711fa56", 00:14:46.953 "zoned": false 00:14:46.953 } 00:14:46.953 ] 00:14:47.212 06:32:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83524 00:14:47.212 06:32:39 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.212 06:32:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:47.212 Running I/O for 10 seconds... 
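The ten-second job above is driven by SPDK's standalone bdevperf tool rather than a kernel initiator: the exported lvol is attached as an NVMe-oF bdev over TCP, then the workload is kicked off through bdevperf's perform_tests RPC, and the per-second result lines that follow show throughput while the lvstore is grown mid-run. A condensed sketch of that sequence, using the same flags seen in this log (paths shortened to the repo root ./ as an editorial shorthand):

  # start bdevperf on core 1 (-m 0x2), 4 KiB IO (-o), queue depth 128 (-q),
  # random writes for 10 s (-w randwrite -t 10), 1 s stat interval (-S 1),
  # and wait for the perform_tests RPC before issuing IO (-z)
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  # attach the subsystem exported by the target as bdev Nvme0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # start the actual IO run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests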
00:14:48.151 Latency(us) 00:14:48.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.151 Nvme0n1 : 1.00 7116.00 27.80 0.00 0.00 0.00 0.00 0.00 00:14:48.151 =================================================================================================================== 00:14:48.151 Total : 7116.00 27.80 0.00 0.00 0.00 0.00 0.00 00:14:48.151 00:14:49.088 06:32:41 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:49.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.088 Nvme0n1 : 2.00 7175.00 28.03 0.00 0.00 0.00 0.00 0.00 00:14:49.088 =================================================================================================================== 00:14:49.088 Total : 7175.00 28.03 0.00 0.00 0.00 0.00 0.00 00:14:49.088 00:14:49.347 true 00:14:49.347 06:32:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:49.347 06:32:41 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:49.612 06:32:42 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:49.612 06:32:42 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:49.612 06:32:42 -- target/nvmf_lvs_grow.sh@65 -- # wait 83524 00:14:50.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.179 Nvme0n1 : 3.00 7142.67 27.90 0.00 0.00 0.00 0.00 0.00 00:14:50.179 =================================================================================================================== 00:14:50.179 Total : 7142.67 27.90 0.00 0.00 0.00 0.00 0.00 00:14:50.179 00:14:51.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.114 Nvme0n1 : 4.00 7146.75 27.92 0.00 0.00 0.00 0.00 0.00 00:14:51.114 =================================================================================================================== 00:14:51.114 Total : 7146.75 27.92 0.00 0.00 0.00 0.00 0.00 00:14:51.114 00:14:52.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.104 Nvme0n1 : 5.00 7104.40 27.75 0.00 0.00 0.00 0.00 0.00 00:14:52.104 =================================================================================================================== 00:14:52.104 Total : 7104.40 27.75 0.00 0.00 0.00 0.00 0.00 00:14:52.104 00:14:53.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.482 Nvme0n1 : 6.00 7068.00 27.61 0.00 0.00 0.00 0.00 0.00 00:14:53.482 =================================================================================================================== 00:14:53.482 Total : 7068.00 27.61 0.00 0.00 0.00 0.00 0.00 00:14:53.482 00:14:54.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.419 Nvme0n1 : 7.00 7044.57 27.52 0.00 0.00 0.00 0.00 0.00 00:14:54.419 =================================================================================================================== 00:14:54.419 Total : 7044.57 27.52 0.00 0.00 0.00 0.00 0.00 00:14:54.419 00:14:55.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.353 Nvme0n1 : 8.00 7035.25 27.48 0.00 0.00 0.00 0.00 0.00 00:14:55.353 
=================================================================================================================== 00:14:55.353 Total : 7035.25 27.48 0.00 0.00 0.00 0.00 0.00 00:14:55.353 00:14:56.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.286 Nvme0n1 : 9.00 7028.89 27.46 0.00 0.00 0.00 0.00 0.00 00:14:56.286 =================================================================================================================== 00:14:56.286 Total : 7028.89 27.46 0.00 0.00 0.00 0.00 0.00 00:14:56.286 00:14:57.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.221 Nvme0n1 : 10.00 7001.30 27.35 0.00 0.00 0.00 0.00 0.00 00:14:57.221 =================================================================================================================== 00:14:57.221 Total : 7001.30 27.35 0.00 0.00 0.00 0.00 0.00 00:14:57.221 00:14:57.221 00:14:57.221 Latency(us) 00:14:57.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.221 Nvme0n1 : 10.01 7010.52 27.38 0.00 0.00 18253.95 8519.68 42181.35 00:14:57.221 =================================================================================================================== 00:14:57.221 Total : 7010.52 27.38 0.00 0.00 18253.95 8519.68 42181.35 00:14:57.221 0 00:14:57.221 06:32:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83477 00:14:57.221 06:32:49 -- common/autotest_common.sh@926 -- # '[' -z 83477 ']' 00:14:57.221 06:32:49 -- common/autotest_common.sh@930 -- # kill -0 83477 00:14:57.221 06:32:49 -- common/autotest_common.sh@931 -- # uname 00:14:57.221 06:32:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.221 06:32:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83477 00:14:57.221 06:32:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:57.222 06:32:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:57.222 killing process with pid 83477 00:14:57.222 06:32:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83477' 00:14:57.222 06:32:49 -- common/autotest_common.sh@945 -- # kill 83477 00:14:57.222 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.222 00:14:57.222 Latency(us) 00:14:57.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.222 =================================================================================================================== 00:14:57.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.222 06:32:49 -- common/autotest_common.sh@950 -- # wait 83477 00:14:57.480 06:32:50 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:57.738 06:32:50 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:57.738 06:32:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:57.997 06:32:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:57.997 06:32:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:57.997 06:32:50 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:58.255 [2024-10-04 06:32:50.811647] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:58.255 
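Deleting the AIO base bdev out from under the lvstore is the point of the recovery check that follows: the hot-remove closes the lvstore, so the next bdev_lvol_get_lvstores call must fail (the harness's NOT helper inverts the exit status), and recreating an AIO bdev on the same backing file lets the lvol metadata be replayed and the lvol bdev reappear. A minimal sketch of the pattern, assuming the NOT and waitforbdev helpers from autotest_common.sh, with $lvs/$lvol standing in for the UUIDs used above and paths shortened to the repo root ./:

  # base bdev gone -> the lvstore must be unreachable
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"
  # same backing file, same 4 KiB block size -> metadata is recovered on examine
  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
  waitforbdev "$lvol"

The free/total cluster reads that follow then confirm the grown geometry (61 free of 99 data clusters) survived the round trip.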
06:32:50 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:58.255 06:32:50 -- common/autotest_common.sh@640 -- # local es=0 00:14:58.255 06:32:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:58.255 06:32:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.255 06:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:58.255 06:32:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.255 06:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:58.255 06:32:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.255 06:32:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:58.255 06:32:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.255 06:32:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:58.255 06:32:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:58.514 2024/10/04 06:32:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:18cb3912-069c-4e30-a773-7997ab825d25], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:58.514 request: 00:14:58.514 { 00:14:58.514 "method": "bdev_lvol_get_lvstores", 00:14:58.514 "params": { 00:14:58.514 "uuid": "18cb3912-069c-4e30-a773-7997ab825d25" 00:14:58.514 } 00:14:58.514 } 00:14:58.514 Got JSON-RPC error response 00:14:58.514 GoRPCClient: error on JSON-RPC call 00:14:58.514 06:32:51 -- common/autotest_common.sh@643 -- # es=1 00:14:58.514 06:32:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:58.514 06:32:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:58.514 06:32:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:58.514 06:32:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.772 aio_bdev 00:14:58.772 06:32:51 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1a77688d-ada9-4054-9a99-c0e8a711fa56 00:14:58.772 06:32:51 -- common/autotest_common.sh@887 -- # local bdev_name=1a77688d-ada9-4054-9a99-c0e8a711fa56 00:14:58.772 06:32:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.772 06:32:51 -- common/autotest_common.sh@889 -- # local i 00:14:58.772 06:32:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.772 06:32:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.772 06:32:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:59.030 06:32:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a77688d-ada9-4054-9a99-c0e8a711fa56 -t 2000 00:14:59.030 [ 00:14:59.030 { 00:14:59.030 "aliases": [ 00:14:59.030 "lvs/lvol" 00:14:59.030 ], 00:14:59.030 "assigned_rate_limits": { 00:14:59.030 "r_mbytes_per_sec": 0, 00:14:59.030 "rw_ios_per_sec": 0, 00:14:59.030 "rw_mbytes_per_sec": 0, 00:14:59.030 "w_mbytes_per_sec": 0 00:14:59.030 }, 00:14:59.030 "block_size": 4096, 
00:14:59.030 "claimed": false, 00:14:59.030 "driver_specific": { 00:14:59.030 "lvol": { 00:14:59.030 "base_bdev": "aio_bdev", 00:14:59.030 "clone": false, 00:14:59.030 "esnap_clone": false, 00:14:59.030 "lvol_store_uuid": "18cb3912-069c-4e30-a773-7997ab825d25", 00:14:59.030 "snapshot": false, 00:14:59.030 "thin_provision": false 00:14:59.030 } 00:14:59.030 }, 00:14:59.030 "name": "1a77688d-ada9-4054-9a99-c0e8a711fa56", 00:14:59.030 "num_blocks": 38912, 00:14:59.030 "product_name": "Logical Volume", 00:14:59.030 "supported_io_types": { 00:14:59.030 "abort": false, 00:14:59.030 "compare": false, 00:14:59.030 "compare_and_write": false, 00:14:59.030 "flush": false, 00:14:59.030 "nvme_admin": false, 00:14:59.030 "nvme_io": false, 00:14:59.030 "read": true, 00:14:59.030 "reset": true, 00:14:59.030 "unmap": true, 00:14:59.030 "write": true, 00:14:59.030 "write_zeroes": true 00:14:59.030 }, 00:14:59.030 "uuid": "1a77688d-ada9-4054-9a99-c0e8a711fa56", 00:14:59.030 "zoned": false 00:14:59.030 } 00:14:59.030 ] 00:14:59.030 06:32:51 -- common/autotest_common.sh@895 -- # return 0 00:14:59.030 06:32:51 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:59.030 06:32:51 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:59.595 06:32:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:59.595 06:32:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18cb3912-069c-4e30-a773-7997ab825d25 00:14:59.595 06:32:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:59.853 06:32:52 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:59.853 06:32:52 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1a77688d-ada9-4054-9a99-c0e8a711fa56 00:15:00.110 06:32:52 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18cb3912-069c-4e30-a773-7997ab825d25 00:15:00.368 06:32:52 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:00.368 06:32:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:00.935 00:15:00.935 real 0m18.012s 00:15:00.935 user 0m17.373s 00:15:00.935 sys 0m2.152s 00:15:00.935 06:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.935 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:15:00.935 ************************************ 00:15:00.935 END TEST lvs_grow_clean 00:15:00.935 ************************************ 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:00.935 06:32:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:00.935 06:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.935 06:32:53 -- common/autotest_common.sh@10 -- # set +x 00:15:00.935 ************************************ 00:15:00.935 START TEST lvs_grow_dirty 00:15:00.935 ************************************ 00:15:00.935 06:32:53 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:00.935 06:32:53 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.193 06:32:53 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:01.193 06:32:53 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:01.453 06:32:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:01.453 06:32:54 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:01.453 06:32:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:01.712 06:32:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:01.712 06:32:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:01.712 06:32:54 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 lvol 150 00:15:01.972 06:32:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:01.972 06:32:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:01.972 06:32:54 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:02.231 [2024-10-04 06:32:54.747634] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:02.231 [2024-10-04 06:32:54.747714] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:02.231 true 00:15:02.231 06:32:54 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:02.231 06:32:54 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:02.489 06:32:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:02.489 06:32:55 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:02.748 06:32:55 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:03.007 06:32:55 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:03.274 06:32:55 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.538 06:32:55 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:03.538 06:32:55 -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83912 00:15:03.538 06:32:55 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.538 06:32:55 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83912 /var/tmp/bdevperf.sock 00:15:03.538 06:32:55 -- common/autotest_common.sh@819 -- # '[' -z 83912 ']' 00:15:03.538 06:32:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.538 06:32:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:03.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.538 06:32:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.538 06:32:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:03.538 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:15:03.538 [2024-10-04 06:32:55.998937] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:03.538 [2024-10-04 06:32:55.999059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83912 ] 00:15:03.538 [2024-10-04 06:32:56.134570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.538 [2024-10-04 06:32:56.211759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.475 06:32:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:04.475 06:32:56 -- common/autotest_common.sh@852 -- # return 0 00:15:04.475 06:32:56 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:04.734 Nvme0n1 00:15:04.734 06:32:57 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:04.993 [ 00:15:04.993 { 00:15:04.993 "aliases": [ 00:15:04.993 "8c44811e-2b59-43a5-80e3-3fa549b7e681" 00:15:04.993 ], 00:15:04.993 "assigned_rate_limits": { 00:15:04.993 "r_mbytes_per_sec": 0, 00:15:04.993 "rw_ios_per_sec": 0, 00:15:04.993 "rw_mbytes_per_sec": 0, 00:15:04.993 "w_mbytes_per_sec": 0 00:15:04.993 }, 00:15:04.993 "block_size": 4096, 00:15:04.993 "claimed": false, 00:15:04.993 "driver_specific": { 00:15:04.993 "mp_policy": "active_passive", 00:15:04.993 "nvme": [ 00:15:04.993 { 00:15:04.993 "ctrlr_data": { 00:15:04.993 "ana_reporting": false, 00:15:04.993 "cntlid": 1, 00:15:04.993 "firmware_revision": "24.01.1", 00:15:04.993 "model_number": "SPDK bdev Controller", 00:15:04.993 "multi_ctrlr": true, 00:15:04.993 "oacs": { 00:15:04.993 "firmware": 0, 00:15:04.993 "format": 0, 00:15:04.993 "ns_manage": 0, 00:15:04.993 "security": 0 00:15:04.993 }, 00:15:04.993 "serial_number": "SPDK0", 00:15:04.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:04.993 "vendor_id": "0x8086" 00:15:04.993 }, 00:15:04.993 "ns_data": { 00:15:04.993 "can_share": true, 00:15:04.993 "id": 1 00:15:04.993 }, 00:15:04.993 "trid": { 00:15:04.993 "adrfam": "IPv4", 00:15:04.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:04.993 "traddr": "10.0.0.2", 00:15:04.993 "trsvcid": "4420", 00:15:04.993 "trtype": "TCP" 00:15:04.993 }, 00:15:04.993 "vs": { 00:15:04.993 "nvme_version": "1.3" 00:15:04.993 } 00:15:04.993 } 00:15:04.993 ] 
00:15:04.993 }, 00:15:04.993 "name": "Nvme0n1", 00:15:04.993 "num_blocks": 38912, 00:15:04.993 "product_name": "NVMe disk", 00:15:04.993 "supported_io_types": { 00:15:04.993 "abort": true, 00:15:04.993 "compare": true, 00:15:04.993 "compare_and_write": true, 00:15:04.993 "flush": true, 00:15:04.993 "nvme_admin": true, 00:15:04.993 "nvme_io": true, 00:15:04.993 "read": true, 00:15:04.993 "reset": true, 00:15:04.993 "unmap": true, 00:15:04.993 "write": true, 00:15:04.993 "write_zeroes": true 00:15:04.993 }, 00:15:04.993 "uuid": "8c44811e-2b59-43a5-80e3-3fa549b7e681", 00:15:04.993 "zoned": false 00:15:04.993 } 00:15:04.993 ] 00:15:04.993 06:32:57 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.993 06:32:57 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83960 00:15:04.993 06:32:57 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:04.993 Running I/O for 10 seconds... 00:15:05.929 Latency(us) 00:15:05.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.930 Nvme0n1 : 1.00 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 00:15:05.930 =================================================================================================================== 00:15:05.930 Total : 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 00:15:05.930 00:15:06.876 06:32:59 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:07.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.162 Nvme0n1 : 2.00 7232.00 28.25 0.00 0.00 0.00 0.00 0.00 00:15:07.162 =================================================================================================================== 00:15:07.162 Total : 7232.00 28.25 0.00 0.00 0.00 0.00 0.00 00:15:07.162 00:15:07.162 true 00:15:07.162 06:32:59 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:07.162 06:32:59 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:07.421 06:33:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:07.421 06:33:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:07.421 06:33:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 83960 00:15:07.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.988 Nvme0n1 : 3.00 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:15:07.988 =================================================================================================================== 00:15:07.988 Total : 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:15:07.988 00:15:08.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.922 Nvme0n1 : 4.00 6892.00 26.92 0.00 0.00 0.00 0.00 0.00 00:15:08.922 =================================================================================================================== 00:15:08.922 Total : 6892.00 26.92 0.00 0.00 0.00 0.00 0.00 00:15:08.922 00:15:10.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.296 Nvme0n1 : 5.00 6833.80 26.69 0.00 0.00 0.00 0.00 0.00 00:15:10.296 =================================================================================================================== 00:15:10.296 Total : 6833.80 26.69 0.00 0.00 0.00 0.00 0.00 00:15:10.296 00:15:11.231 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.231 Nvme0n1 : 6.00 6797.67 26.55 0.00 0.00 0.00 0.00 0.00 00:15:11.231 =================================================================================================================== 00:15:11.231 Total : 6797.67 26.55 0.00 0.00 0.00 0.00 0.00 00:15:11.231 00:15:12.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.167 Nvme0n1 : 7.00 6773.71 26.46 0.00 0.00 0.00 0.00 0.00 00:15:12.167 =================================================================================================================== 00:15:12.167 Total : 6773.71 26.46 0.00 0.00 0.00 0.00 0.00 00:15:12.167 00:15:13.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.105 Nvme0n1 : 8.00 6750.12 26.37 0.00 0.00 0.00 0.00 0.00 00:15:13.105 =================================================================================================================== 00:15:13.105 Total : 6750.12 26.37 0.00 0.00 0.00 0.00 0.00 00:15:13.105 00:15:14.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.041 Nvme0n1 : 9.00 6734.89 26.31 0.00 0.00 0.00 0.00 0.00 00:15:14.041 =================================================================================================================== 00:15:14.041 Total : 6734.89 26.31 0.00 0.00 0.00 0.00 0.00 00:15:14.041 00:15:14.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.978 Nvme0n1 : 10.00 6724.80 26.27 0.00 0.00 0.00 0.00 0.00 00:15:14.978 =================================================================================================================== 00:15:14.978 Total : 6724.80 26.27 0.00 0.00 0.00 0.00 0.00 00:15:14.978 00:15:14.978 00:15:14.978 Latency(us) 00:15:14.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.978 Nvme0n1 : 10.02 6727.13 26.28 0.00 0.00 19015.66 4647.10 163959.16 00:15:14.978 =================================================================================================================== 00:15:14.978 Total : 6727.13 26.28 0.00 0.00 19015.66 4647.10 163959.16 00:15:14.978 0 00:15:14.978 06:33:07 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83912 00:15:14.978 06:33:07 -- common/autotest_common.sh@926 -- # '[' -z 83912 ']' 00:15:14.978 06:33:07 -- common/autotest_common.sh@930 -- # kill -0 83912 00:15:14.978 06:33:07 -- common/autotest_common.sh@931 -- # uname 00:15:14.978 06:33:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.978 06:33:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83912 00:15:14.978 06:33:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:14.978 06:33:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:14.978 killing process with pid 83912 00:15:14.978 06:33:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83912' 00:15:14.978 06:33:07 -- common/autotest_common.sh@945 -- # kill 83912 00:15:14.978 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.978 00:15:14.978 Latency(us) 00:15:14.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.978 =================================================================================================================== 00:15:14.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:14.978 06:33:07 -- 
common/autotest_common.sh@950 -- # wait 83912 00:15:15.237 06:33:07 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83314 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@74 -- # wait 83314 00:15:15.805 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83314 Killed "${NVMF_APP[@]}" "$@" 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:15.805 06:33:08 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:15.805 06:33:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:15.805 06:33:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:15.805 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:15:15.805 06:33:08 -- nvmf/common.sh@469 -- # nvmfpid=84111 00:15:15.805 06:33:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:15.805 06:33:08 -- nvmf/common.sh@470 -- # waitforlisten 84111 00:15:15.805 06:33:08 -- common/autotest_common.sh@819 -- # '[' -z 84111 ']' 00:15:15.805 06:33:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.805 06:33:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:15.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.805 06:33:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.805 06:33:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:15.805 06:33:08 -- common/autotest_common.sh@10 -- # set +x 00:15:16.064 [2024-10-04 06:33:08.535682] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:16.064 [2024-10-04 06:33:08.535795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.064 [2024-10-04 06:33:08.676504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.323 [2024-10-04 06:33:08.752461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.323 [2024-10-04 06:33:08.752631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.323 [2024-10-04 06:33:08.752643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.323 [2024-10-04 06:33:08.752650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:16.323 [2024-10-04 06:33:08.752674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.889 06:33:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.890 06:33:09 -- common/autotest_common.sh@852 -- # return 0 00:15:16.890 06:33:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:16.890 06:33:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:16.890 06:33:09 -- common/autotest_common.sh@10 -- # set +x 00:15:16.890 06:33:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.890 06:33:09 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:17.148 [2024-10-04 06:33:09.718065] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:17.148 [2024-10-04 06:33:09.718463] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:17.148 [2024-10-04 06:33:09.718709] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:17.148 06:33:09 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:17.148 06:33:09 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:17.148 06:33:09 -- common/autotest_common.sh@887 -- # local bdev_name=8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:17.148 06:33:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:17.148 06:33:09 -- common/autotest_common.sh@889 -- # local i 00:15:17.148 06:33:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:17.148 06:33:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:17.148 06:33:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:17.409 06:33:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c44811e-2b59-43a5-80e3-3fa549b7e681 -t 2000 00:15:17.668 [ 00:15:17.668 { 00:15:17.668 "aliases": [ 00:15:17.668 "lvs/lvol" 00:15:17.668 ], 00:15:17.668 "assigned_rate_limits": { 00:15:17.668 "r_mbytes_per_sec": 0, 00:15:17.668 "rw_ios_per_sec": 0, 00:15:17.668 "rw_mbytes_per_sec": 0, 00:15:17.668 "w_mbytes_per_sec": 0 00:15:17.668 }, 00:15:17.668 "block_size": 4096, 00:15:17.668 "claimed": false, 00:15:17.668 "driver_specific": { 00:15:17.668 "lvol": { 00:15:17.668 "base_bdev": "aio_bdev", 00:15:17.668 "clone": false, 00:15:17.668 "esnap_clone": false, 00:15:17.668 "lvol_store_uuid": "052d38b4-a84f-4ab8-8e51-3be1bb5eb424", 00:15:17.668 "snapshot": false, 00:15:17.668 "thin_provision": false 00:15:17.668 } 00:15:17.668 }, 00:15:17.668 "name": "8c44811e-2b59-43a5-80e3-3fa549b7e681", 00:15:17.668 "num_blocks": 38912, 00:15:17.668 "product_name": "Logical Volume", 00:15:17.668 "supported_io_types": { 00:15:17.668 "abort": false, 00:15:17.668 "compare": false, 00:15:17.668 "compare_and_write": false, 00:15:17.668 "flush": false, 00:15:17.668 "nvme_admin": false, 00:15:17.668 "nvme_io": false, 00:15:17.668 "read": true, 00:15:17.668 "reset": true, 00:15:17.668 "unmap": true, 00:15:17.668 "write": true, 00:15:17.668 "write_zeroes": true 00:15:17.668 }, 00:15:17.668 "uuid": "8c44811e-2b59-43a5-80e3-3fa549b7e681", 00:15:17.668 "zoned": false 00:15:17.668 } 00:15:17.668 ] 00:15:17.668 06:33:10 -- common/autotest_common.sh@895 -- # return 0 00:15:17.668 06:33:10 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:17.668 06:33:10 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:17.928 06:33:10 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:17.928 06:33:10 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:17.928 06:33:10 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:18.187 06:33:10 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:18.187 06:33:10 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:18.446 [2024-10-04 06:33:10.943572] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:18.446 06:33:10 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:18.446 06:33:10 -- common/autotest_common.sh@640 -- # local es=0 00:15:18.446 06:33:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:18.446 06:33:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.446 06:33:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:18.446 06:33:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.446 06:33:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:18.446 06:33:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.446 06:33:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:18.446 06:33:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.446 06:33:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:18.446 06:33:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:18.705 2024/10/04 06:33:11 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:052d38b4-a84f-4ab8-8e51-3be1bb5eb424], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:18.705 request: 00:15:18.705 { 00:15:18.705 "method": "bdev_lvol_get_lvstores", 00:15:18.705 "params": { 00:15:18.705 "uuid": "052d38b4-a84f-4ab8-8e51-3be1bb5eb424" 00:15:18.705 } 00:15:18.705 } 00:15:18.705 Got JSON-RPC error response 00:15:18.705 GoRPCClient: error on JSON-RPC call 00:15:18.705 06:33:11 -- common/autotest_common.sh@643 -- # es=1 00:15:18.705 06:33:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:18.705 06:33:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:18.705 06:33:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:18.705 06:33:11 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:18.964 aio_bdev 00:15:18.964 06:33:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:18.964 06:33:11 -- common/autotest_common.sh@887 -- # local bdev_name=8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:18.964 06:33:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:18.964 06:33:11 -- 
common/autotest_common.sh@889 -- # local i 00:15:18.964 06:33:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:18.964 06:33:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:18.964 06:33:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:19.224 06:33:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c44811e-2b59-43a5-80e3-3fa549b7e681 -t 2000 00:15:19.224 [ 00:15:19.224 { 00:15:19.224 "aliases": [ 00:15:19.224 "lvs/lvol" 00:15:19.224 ], 00:15:19.224 "assigned_rate_limits": { 00:15:19.224 "r_mbytes_per_sec": 0, 00:15:19.224 "rw_ios_per_sec": 0, 00:15:19.224 "rw_mbytes_per_sec": 0, 00:15:19.224 "w_mbytes_per_sec": 0 00:15:19.224 }, 00:15:19.224 "block_size": 4096, 00:15:19.224 "claimed": false, 00:15:19.224 "driver_specific": { 00:15:19.224 "lvol": { 00:15:19.224 "base_bdev": "aio_bdev", 00:15:19.224 "clone": false, 00:15:19.224 "esnap_clone": false, 00:15:19.224 "lvol_store_uuid": "052d38b4-a84f-4ab8-8e51-3be1bb5eb424", 00:15:19.224 "snapshot": false, 00:15:19.224 "thin_provision": false 00:15:19.224 } 00:15:19.224 }, 00:15:19.224 "name": "8c44811e-2b59-43a5-80e3-3fa549b7e681", 00:15:19.224 "num_blocks": 38912, 00:15:19.224 "product_name": "Logical Volume", 00:15:19.224 "supported_io_types": { 00:15:19.224 "abort": false, 00:15:19.224 "compare": false, 00:15:19.224 "compare_and_write": false, 00:15:19.224 "flush": false, 00:15:19.224 "nvme_admin": false, 00:15:19.224 "nvme_io": false, 00:15:19.224 "read": true, 00:15:19.224 "reset": true, 00:15:19.224 "unmap": true, 00:15:19.224 "write": true, 00:15:19.224 "write_zeroes": true 00:15:19.224 }, 00:15:19.224 "uuid": "8c44811e-2b59-43a5-80e3-3fa549b7e681", 00:15:19.224 "zoned": false 00:15:19.224 } 00:15:19.224 ] 00:15:19.224 06:33:11 -- common/autotest_common.sh@895 -- # return 0 00:15:19.224 06:33:11 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:19.224 06:33:11 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:19.484 06:33:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:19.484 06:33:12 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:19.484 06:33:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:19.743 06:33:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:19.743 06:33:12 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8c44811e-2b59-43a5-80e3-3fa549b7e681 00:15:20.002 06:33:12 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 052d38b4-a84f-4ab8-8e51-3be1bb5eb424 00:15:20.260 06:33:12 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:20.519 06:33:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:21.086 ************************************ 00:15:21.086 END TEST lvs_grow_dirty 00:15:21.086 ************************************ 00:15:21.086 00:15:21.086 real 0m20.043s 00:15:21.086 user 0m39.198s 00:15:21.086 sys 0m10.092s 00:15:21.086 06:33:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.086 06:33:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.086 06:33:13 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:21.086 06:33:13 -- common/autotest_common.sh@796 -- # type=--id 00:15:21.086 06:33:13 -- common/autotest_common.sh@797 -- # id=0 00:15:21.086 06:33:13 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:21.086 06:33:13 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:21.086 06:33:13 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:21.086 06:33:13 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:21.086 06:33:13 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:21.086 06:33:13 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:21.086 nvmf_trace.0 00:15:21.086 06:33:13 -- common/autotest_common.sh@811 -- # return 0 00:15:21.086 06:33:13 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:21.086 06:33:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:21.086 06:33:13 -- nvmf/common.sh@116 -- # sync 00:15:21.086 06:33:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:21.086 06:33:13 -- nvmf/common.sh@119 -- # set +e 00:15:21.086 06:33:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:21.086 06:33:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:21.086 rmmod nvme_tcp 00:15:21.086 rmmod nvme_fabrics 00:15:21.345 rmmod nvme_keyring 00:15:21.345 06:33:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:21.345 06:33:13 -- nvmf/common.sh@123 -- # set -e 00:15:21.345 06:33:13 -- nvmf/common.sh@124 -- # return 0 00:15:21.345 06:33:13 -- nvmf/common.sh@477 -- # '[' -n 84111 ']' 00:15:21.345 06:33:13 -- nvmf/common.sh@478 -- # killprocess 84111 00:15:21.345 06:33:13 -- common/autotest_common.sh@926 -- # '[' -z 84111 ']' 00:15:21.345 06:33:13 -- common/autotest_common.sh@930 -- # kill -0 84111 00:15:21.345 06:33:13 -- common/autotest_common.sh@931 -- # uname 00:15:21.345 06:33:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.345 06:33:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84111 00:15:21.345 killing process with pid 84111 00:15:21.345 06:33:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:21.345 06:33:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:21.345 06:33:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84111' 00:15:21.345 06:33:13 -- common/autotest_common.sh@945 -- # kill 84111 00:15:21.345 06:33:13 -- common/autotest_common.sh@950 -- # wait 84111 00:15:21.345 06:33:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:21.345 06:33:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:21.345 06:33:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:21.345 06:33:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.345 06:33:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:21.345 06:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.345 06:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.345 06:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.603 06:33:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:21.603 00:15:21.603 real 0m40.574s 00:15:21.603 user 1m2.692s 00:15:21.603 sys 0m12.938s 00:15:21.603 06:33:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.603 ************************************ 00:15:21.603 06:33:14 -- common/autotest_common.sh@10 -- # set 
+x 00:15:21.603 END TEST nvmf_lvs_grow 00:15:21.603 ************************************ 00:15:21.603 06:33:14 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:21.603 06:33:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:21.603 06:33:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.603 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:15:21.603 ************************************ 00:15:21.603 START TEST nvmf_bdev_io_wait 00:15:21.603 ************************************ 00:15:21.603 06:33:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:21.603 * Looking for test storage... 00:15:21.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.603 06:33:14 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.603 06:33:14 -- nvmf/common.sh@7 -- # uname -s 00:15:21.603 06:33:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.603 06:33:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.603 06:33:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.603 06:33:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.603 06:33:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.603 06:33:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.603 06:33:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.603 06:33:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.603 06:33:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.603 06:33:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.603 06:33:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:21.604 06:33:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:21.604 06:33:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.604 06:33:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.604 06:33:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.604 06:33:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.604 06:33:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.604 06:33:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.604 06:33:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.604 06:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.604 06:33:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.604 06:33:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.604 06:33:14 -- paths/export.sh@5 -- # export PATH 00:15:21.604 06:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.604 06:33:14 -- nvmf/common.sh@46 -- # : 0 00:15:21.604 06:33:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:21.604 06:33:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:21.604 06:33:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:21.604 06:33:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.604 06:33:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.604 06:33:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:21.604 06:33:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:21.604 06:33:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:21.604 06:33:14 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.604 06:33:14 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.604 06:33:14 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:21.604 06:33:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:21.604 06:33:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.604 06:33:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:21.604 06:33:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:21.604 06:33:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:21.604 06:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.604 06:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.604 06:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.604 06:33:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:21.604 06:33:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:21.604 06:33:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:21.604 06:33:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:21.604 06:33:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
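The nvmf_veth_init call traced next builds the virtual topology every tcp autotest runs on: the target-side interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3) while the initiator side stays in the root namespace (10.0.0.1), with the veth peers joined by the nvmf_br bridge. A condensed sketch of the commands involved, mirroring the nvmf/common.sh trace below (second target interface, remaining link-up steps, and the iptables accept rule omitted):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: each *_if end carries an address, its *_br peer plugs into the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The pings at the end of this section are the smoke test for that wiring: 10.0.0.2 and 10.0.0.3 are reached from the root namespace, and 10.0.0.1 from inside the target namespace.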
00:15:21.604 06:33:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:21.604 06:33:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.604 06:33:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.604 06:33:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:21.604 06:33:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:21.604 06:33:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:21.604 06:33:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:21.604 06:33:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:21.604 06:33:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.604 06:33:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:21.604 06:33:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:21.604 06:33:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:21.604 06:33:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:21.604 06:33:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:21.604 06:33:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:21.604 Cannot find device "nvmf_tgt_br" 00:15:21.604 06:33:14 -- nvmf/common.sh@154 -- # true 00:15:21.604 06:33:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:21.604 Cannot find device "nvmf_tgt_br2" 00:15:21.604 06:33:14 -- nvmf/common.sh@155 -- # true 00:15:21.604 06:33:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:21.604 06:33:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:21.604 Cannot find device "nvmf_tgt_br" 00:15:21.604 06:33:14 -- nvmf/common.sh@157 -- # true 00:15:21.604 06:33:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:21.604 Cannot find device "nvmf_tgt_br2" 00:15:21.604 06:33:14 -- nvmf/common.sh@158 -- # true 00:15:21.604 06:33:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:21.864 06:33:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:21.864 06:33:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:21.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.864 06:33:14 -- nvmf/common.sh@161 -- # true 00:15:21.864 06:33:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:21.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:21.864 06:33:14 -- nvmf/common.sh@162 -- # true 00:15:21.864 06:33:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:21.864 06:33:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:21.864 06:33:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:21.864 06:33:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:21.864 06:33:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:21.864 06:33:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.864 06:33:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.864 06:33:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.864 06:33:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.864 
06:33:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:21.864 06:33:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:21.864 06:33:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:21.864 06:33:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:21.864 06:33:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.864 06:33:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.864 06:33:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.864 06:33:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:21.864 06:33:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:21.864 06:33:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.864 06:33:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.864 06:33:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.864 06:33:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.864 06:33:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.864 06:33:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:21.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:21.864 00:15:21.864 --- 10.0.0.2 ping statistics --- 00:15:21.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.864 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:21.864 06:33:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:21.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:21.864 00:15:21.864 --- 10.0.0.3 ping statistics --- 00:15:21.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.864 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:21.864 06:33:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:21.864 00:15:21.864 --- 10.0.0.1 ping statistics --- 00:15:21.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.864 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:21.864 06:33:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.864 06:33:14 -- nvmf/common.sh@421 -- # return 0 00:15:21.864 06:33:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:21.864 06:33:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.864 06:33:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:21.864 06:33:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:21.864 06:33:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.864 06:33:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:21.864 06:33:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:21.864 06:33:14 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:21.864 06:33:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:21.864 06:33:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:21.864 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:15:22.130 06:33:14 -- nvmf/common.sh@469 -- # nvmfpid=84523 00:15:22.130 06:33:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:22.130 06:33:14 -- nvmf/common.sh@470 -- # waitforlisten 84523 00:15:22.130 06:33:14 -- common/autotest_common.sh@819 -- # '[' -z 84523 ']' 00:15:22.130 06:33:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.130 06:33:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:22.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.130 06:33:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.130 06:33:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:22.130 06:33:14 -- common/autotest_common.sh@10 -- # set +x 00:15:22.130 [2024-10-04 06:33:14.595810] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:22.130 [2024-10-04 06:33:14.595954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.130 [2024-10-04 06:33:14.734115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.404 [2024-10-04 06:33:14.809674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:22.404 [2024-10-04 06:33:14.809873] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.404 [2024-10-04 06:33:14.809890] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.404 [2024-10-04 06:33:14.809900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
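Annotation — condensed replay of the namespace topology nvmf_veth_init just built and the target launch that follows (commands lifted from the trace above; assumes root and the SPDK repo as cwd; the second target interface nvmf_tgt_if2/10.0.0.3 and the per-interface `ip link set ... up` calls are elided for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# target runs inside the namespace, so 10.0.0.2:4420 is reachable over the bridge:
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc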
00:15:22.404 [2024-10-04 06:33:14.809979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.404 [2024-10-04 06:33:14.810062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.404 [2024-10-04 06:33:14.810718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.404 [2024-10-04 06:33:14.810745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.970 06:33:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:22.970 06:33:15 -- common/autotest_common.sh@852 -- # return 0 00:15:22.970 06:33:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:22.970 06:33:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:22.970 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:22.970 06:33:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.970 06:33:15 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:22.970 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.970 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:22.970 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.970 06:33:15 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:22.970 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.970 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.230 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.230 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 [2024-10-04 06:33:15.720981] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.230 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.230 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 Malloc0 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.230 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.230 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.230 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.230 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.230 06:33:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.230 06:33:15 -- common/autotest_common.sh@10 -- # set +x 00:15:23.230 [2024-10-04 06:33:15.776990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.230 06:33:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84582 00:15:23.230 06:33:15 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:23.230 06:33:15 -- nvmf/common.sh@520 -- # config=() 00:15:23.230 06:33:15 -- target/bdev_io_wait.sh@30 -- # READ_PID=84584 00:15:23.230 06:33:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:23.230 06:33:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:23.230 06:33:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:23.230 { 00:15:23.230 "params": { 00:15:23.230 "name": "Nvme$subsystem", 00:15:23.230 "trtype": "$TEST_TRANSPORT", 00:15:23.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.230 "adrfam": "ipv4", 00:15:23.230 "trsvcid": "$NVMF_PORT", 00:15:23.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.230 "hdgst": ${hdgst:-false}, 00:15:23.230 "ddgst": ${ddgst:-false} 00:15:23.230 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 } 00:15:23.231 EOF 00:15:23.231 )") 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # config=() 00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:23.231 06:33:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:23.231 { 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme$subsystem", 00:15:23.231 "trtype": "$TEST_TRANSPORT", 00:15:23.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "$NVMF_PORT", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.231 "hdgst": ${hdgst:-false}, 00:15:23.231 "ddgst": ${ddgst:-false} 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 } 00:15:23.231 EOF 00:15:23.231 )") 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # cat 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # cat 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:23.231 06:33:15 -- nvmf/common.sh@544 -- # jq . 00:15:23.231 06:33:15 -- nvmf/common.sh@544 -- # jq . 
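Annotation — the heredocs above are gen_nvmf_target_json accumulating one bdev_nvme_attach_controller stanza per subsystem into a config array; jq then validates it and printf emits the final JSON (printed in full below) that each bdevperf reads. The target-side provisioning traced just before it, replayed as plain rpc.py calls (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; arguments lifted verbatim):

scripts/rpc.py bdev_set_options -p 5 -c 1    # deliberately tiny bdev IO pool/cache, so IO must wait
scripts/rpc.py framework_start_init          # leave the --wait-for-rpc holding state
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420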
00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # config=() 00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:23.231 06:33:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84586 00:15:23.231 06:33:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:23.231 06:33:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme1", 00:15:23.231 "trtype": "tcp", 00:15:23.231 "traddr": "10.0.0.2", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "4420", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.231 "hdgst": false, 00:15:23.231 "ddgst": false 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 }' 00:15:23.231 06:33:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:23.231 06:33:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme1", 00:15:23.231 "trtype": "tcp", 00:15:23.231 "traddr": "10.0.0.2", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "4420", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.231 "hdgst": false, 00:15:23.231 "ddgst": false 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 }' 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84596 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@35 -- # sync 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:23.231 { 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme$subsystem", 00:15:23.231 "trtype": "$TEST_TRANSPORT", 00:15:23.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "$NVMF_PORT", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.231 "hdgst": ${hdgst:-false}, 00:15:23.231 "ddgst": ${ddgst:-false} 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 } 00:15:23.231 EOF 00:15:23.231 )") 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # config=() 00:15:23.231 06:33:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:23.231 06:33:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:23.231 { 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme$subsystem", 00:15:23.231 "trtype": "$TEST_TRANSPORT", 00:15:23.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "$NVMF_PORT", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.231 "hdgst": ${hdgst:-false}, 00:15:23.231 "ddgst": ${ddgst:-false} 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 } 00:15:23.231 EOF 00:15:23.231 )") 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # cat 00:15:23.231 06:33:15 -- nvmf/common.sh@542 -- # cat 00:15:23.231 06:33:15 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:23.231 06:33:15 -- nvmf/common.sh@544 -- # jq . 00:15:23.231 06:33:15 -- nvmf/common.sh@544 -- # jq . 
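Annotation — the --json /dev/fd/63 on each bdevperf command line above is bash process substitution over that generated config; an equivalent standalone invocation of the write job (run from the repo root) would be:

build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)    # the shell expands <(...) to /dev/fd/63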
00:15:23.231 06:33:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:23.231 06:33:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme1", 00:15:23.231 "trtype": "tcp", 00:15:23.231 "traddr": "10.0.0.2", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "4420", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.231 "hdgst": false, 00:15:23.231 "ddgst": false 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 }' 00:15:23.231 06:33:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:23.231 06:33:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:23.231 "params": { 00:15:23.231 "name": "Nvme1", 00:15:23.231 "trtype": "tcp", 00:15:23.231 "traddr": "10.0.0.2", 00:15:23.231 "adrfam": "ipv4", 00:15:23.231 "trsvcid": "4420", 00:15:23.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.231 "hdgst": false, 00:15:23.231 "ddgst": false 00:15:23.231 }, 00:15:23.231 "method": "bdev_nvme_attach_controller" 00:15:23.231 }' 00:15:23.231 [2024-10-04 06:33:15.844478] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:23.231 [2024-10-04 06:33:15.844752] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:23.231 [2024-10-04 06:33:15.854131] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:23.231 [2024-10-04 06:33:15.854200] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:23.232 06:33:15 -- target/bdev_io_wait.sh@37 -- # wait 84582 00:15:23.232 [2024-10-04 06:33:15.859294] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:23.232 [2024-10-04 06:33:15.859386] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:23.232 [2024-10-04 06:33:15.865641] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:23.232 [2024-10-04 06:33:15.865729] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:23.491 [2024-10-04 06:33:16.051303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.491 [2024-10-04 06:33:16.126124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:23.491 [2024-10-04 06:33:16.129739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.750 [2024-10-04 06:33:16.205548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:23.750 [2024-10-04 06:33:16.211529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.750 [2024-10-04 06:33:16.283456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:23.750 [2024-10-04 06:33:16.287050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.750 Running I/O for 1 seconds... 
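Annotation — the four jobs run concurrently, one workload each (write/read/flush/unmap) on disjoint core masks 0x10/0x20/0x40/0x80 with distinct DPDK shm ids (-i 1..4) so they coexist with the 0xF target, and the script collects them by pid. A sketch of that orchestration, not the literal script text (WRITE_PID etc. are the harness variables seen above):

bdevperf -m 0x10 -i 1 ... -w write & WRITE_PID=$!
bdevperf -m 0x20 -i 2 ... -w read  & READ_PID=$!
bdevperf -m 0x40 -i 3 ... -w flush & FLUSH_PID=$!
bdevperf -m 0x80 -i 4 ... -w unmap & UNMAP_PID=$!
sync    # the @35 sync seen in the trace, before collecting results
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"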
00:15:23.750 Running I/O for 1 seconds... 00:15:23.750 [2024-10-04 06:33:16.365181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:23.750 Running I/O for 1 seconds... 00:15:24.008 Running I/O for 1 seconds... 00:15:24.946 00:15:24.946 Latency(us) 00:15:24.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.946 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:24.946 Nvme1n1 : 1.00 192085.54 750.33 0.00 0.00 663.81 269.96 1392.64 00:15:24.946 =================================================================================================================== 00:15:24.946 Total : 192085.54 750.33 0.00 0.00 663.81 269.96 1392.64 00:15:24.946 00:15:24.946 Latency(us) 00:15:24.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.946 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:24.946 Nvme1n1 : 1.01 9166.95 35.81 0.00 0.00 13905.94 7626.01 19899.11 00:15:24.946 =================================================================================================================== 00:15:24.946 Total : 9166.95 35.81 0.00 0.00 13905.94 7626.01 19899.11 00:15:24.946 00:15:24.946 Latency(us) 00:15:24.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.946 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:24.946 Nvme1n1 : 1.01 6152.67 24.03 0.00 0.00 20710.60 8162.21 34555.35 00:15:24.946 =================================================================================================================== 00:15:24.946 Total : 6152.67 24.03 0.00 0.00 20710.60 8162.21 34555.35 00:15:24.946 00:15:24.946 Latency(us) 00:15:24.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.946 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:24.946 Nvme1n1 : 1.01 6337.84 24.76 0.00 0.00 20094.58 7506.85 30146.56 00:15:24.946 =================================================================================================================== 00:15:24.946 Total : 6337.84 24.76 0.00 0.00 20094.58 7506.85 30146.56 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@38 -- # wait 84584 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@39 -- # wait 84586 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@40 -- # wait 84596 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.206 06:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.206 06:33:17 -- common/autotest_common.sh@10 -- # set +x 00:15:25.206 06:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:25.206 06:33:17 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:25.206 06:33:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.206 06:33:17 -- nvmf/common.sh@116 -- # sync 00:15:25.206 06:33:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.206 06:33:17 -- nvmf/common.sh@119 -- # set +e 00:15:25.206 06:33:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.206 06:33:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.206 rmmod nvme_tcp 00:15:25.206 rmmod nvme_fabrics 00:15:25.206 rmmod nvme_keyring 00:15:25.206 06:33:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.206 06:33:17 -- nvmf/common.sh@123 -- # set -e 00:15:25.206 06:33:17 -- nvmf/common.sh@124 -- # return 0 00:15:25.206 06:33:17 -- 
nvmf/common.sh@477 -- # '[' -n 84523 ']' 00:15:25.206 06:33:17 -- nvmf/common.sh@478 -- # killprocess 84523 00:15:25.206 06:33:17 -- common/autotest_common.sh@926 -- # '[' -z 84523 ']' 00:15:25.206 06:33:17 -- common/autotest_common.sh@930 -- # kill -0 84523 00:15:25.206 06:33:17 -- common/autotest_common.sh@931 -- # uname 00:15:25.206 06:33:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.206 06:33:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84523 00:15:25.465 killing process with pid 84523 00:15:25.465 06:33:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.465 06:33:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.465 06:33:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84523' 00:15:25.465 06:33:17 -- common/autotest_common.sh@945 -- # kill 84523 00:15:25.465 06:33:17 -- common/autotest_common.sh@950 -- # wait 84523 00:15:25.724 06:33:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.724 06:33:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.724 06:33:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.724 06:33:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.724 06:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.724 06:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.724 06:33:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.724 00:15:25.724 real 0m4.115s 00:15:25.724 user 0m18.142s 00:15:25.724 sys 0m1.948s 00:15:25.724 06:33:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.724 ************************************ 00:15:25.724 END TEST nvmf_bdev_io_wait 00:15:25.724 ************************************ 00:15:25.724 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 06:33:18 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:25.724 06:33:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:25.724 06:33:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:25.724 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 ************************************ 00:15:25.724 START TEST nvmf_queue_depth 00:15:25.724 ************************************ 00:15:25.724 06:33:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:25.724 * Looking for test storage... 
00:15:25.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.724 06:33:18 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.724 06:33:18 -- nvmf/common.sh@7 -- # uname -s 00:15:25.724 06:33:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.724 06:33:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.724 06:33:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.724 06:33:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.724 06:33:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.724 06:33:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.724 06:33:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.724 06:33:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.724 06:33:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.724 06:33:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:25.724 06:33:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:25.724 06:33:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.724 06:33:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.724 06:33:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.724 06:33:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.724 06:33:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.724 06:33:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.724 06:33:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.724 06:33:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.724 06:33:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.724 06:33:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.724 06:33:18 -- 
paths/export.sh@5 -- # export PATH 00:15:25.724 06:33:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.724 06:33:18 -- nvmf/common.sh@46 -- # : 0 00:15:25.724 06:33:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.724 06:33:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.724 06:33:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.724 06:33:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.724 06:33:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.724 06:33:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:25.724 06:33:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.724 06:33:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.724 06:33:18 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:25.724 06:33:18 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:25.724 06:33:18 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.724 06:33:18 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:25.724 06:33:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.724 06:33:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.724 06:33:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.724 06:33:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.724 06:33:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.724 06:33:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.724 06:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.724 06:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.724 06:33:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.724 06:33:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.724 06:33:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.724 06:33:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.724 06:33:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.724 06:33:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.724 06:33:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.724 06:33:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.724 06:33:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.724 06:33:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.725 06:33:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.725 06:33:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.725 06:33:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.725 06:33:18 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.725 06:33:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.725 06:33:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.984 Cannot find device "nvmf_tgt_br" 00:15:25.984 06:33:18 -- nvmf/common.sh@154 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.984 Cannot find device "nvmf_tgt_br2" 00:15:25.984 06:33:18 -- nvmf/common.sh@155 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.984 06:33:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.984 Cannot find device "nvmf_tgt_br" 00:15:25.984 06:33:18 -- nvmf/common.sh@157 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.984 Cannot find device "nvmf_tgt_br2" 00:15:25.984 06:33:18 -- nvmf/common.sh@158 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.984 06:33:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.984 06:33:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.984 06:33:18 -- nvmf/common.sh@161 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.984 06:33:18 -- nvmf/common.sh@162 -- # true 00:15:25.984 06:33:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.984 06:33:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.984 06:33:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.984 06:33:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.984 06:33:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.984 06:33:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.984 06:33:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.984 06:33:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.984 06:33:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.984 06:33:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.984 06:33:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.984 06:33:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.984 06:33:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.984 06:33:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.984 06:33:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.984 06:33:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.985 06:33:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.985 06:33:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.985 06:33:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.985 06:33:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.985 06:33:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.985 
06:33:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.985 06:33:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.243 06:33:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:26.243 00:15:26.243 --- 10.0.0.2 ping statistics --- 00:15:26.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.243 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:26.243 06:33:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:26.243 00:15:26.243 --- 10.0.0.3 ping statistics --- 00:15:26.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.243 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:26.243 06:33:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:26.243 00:15:26.243 --- 10.0.0.1 ping statistics --- 00:15:26.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.243 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:26.243 06:33:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.243 06:33:18 -- nvmf/common.sh@421 -- # return 0 00:15:26.243 06:33:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.243 06:33:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.243 06:33:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.243 06:33:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.243 06:33:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.243 06:33:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.243 06:33:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.243 06:33:18 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:26.243 06:33:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.243 06:33:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:26.243 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:15:26.243 06:33:18 -- nvmf/common.sh@469 -- # nvmfpid=84813 00:15:26.243 06:33:18 -- nvmf/common.sh@470 -- # waitforlisten 84813 00:15:26.243 06:33:18 -- common/autotest_common.sh@819 -- # '[' -z 84813 ']' 00:15:26.243 06:33:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.243 06:33:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.243 06:33:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:26.243 06:33:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.243 06:33:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:26.243 06:33:18 -- common/autotest_common.sh@10 -- # set +x 00:15:26.243 [2024-10-04 06:33:18.753019] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:15:26.243 [2024-10-04 06:33:18.753105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.243 [2024-10-04 06:33:18.891797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.501 [2024-10-04 06:33:18.961383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.501 [2024-10-04 06:33:18.961552] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.501 [2024-10-04 06:33:18.961565] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.501 [2024-10-04 06:33:18.961574] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.501 [2024-10-04 06:33:18.961601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.436 06:33:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:27.436 06:33:19 -- common/autotest_common.sh@852 -- # return 0 00:15:27.436 06:33:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.436 06:33:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 06:33:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.436 06:33:19 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.436 06:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 [2024-10-04 06:33:19.803994] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.436 06:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 06:33:19 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.436 06:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 Malloc0 00:15:27.436 06:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 06:33:19 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.436 06:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 06:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 06:33:19 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.436 06:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 06:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 06:33:19 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.436 06:33:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 [2024-10-04 06:33:19.871101] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.436 06:33:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 06:33:19 -- target/queue_depth.sh@30 -- # bdevperf_pid=84869 00:15:27.436 06:33:19 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:27.436 06:33:19 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.436 06:33:19 -- target/queue_depth.sh@33 -- # waitforlisten 84869 /var/tmp/bdevperf.sock 00:15:27.436 06:33:19 -- common/autotest_common.sh@819 -- # '[' -z 84869 ']' 00:15:27.436 06:33:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.436 06:33:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:27.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.436 06:33:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.436 06:33:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:27.436 06:33:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 [2024-10-04 06:33:19.925945] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:27.436 [2024-10-04 06:33:19.926041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84869 ] 00:15:27.436 [2024-10-04 06:33:20.065324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.695 [2024-10-04 06:33:20.157048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.631 06:33:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:28.631 06:33:20 -- common/autotest_common.sh@852 -- # return 0 00:15:28.631 06:33:20 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:28.631 06:33:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.631 06:33:20 -- common/autotest_common.sh@10 -- # set +x 00:15:28.631 NVMe0n1 00:15:28.631 06:33:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.631 06:33:21 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.631 Running I/O for 10 seconds... 
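Annotation — unlike the bdev_io_wait jobs, this bdevperf starts idle (-z) on its own RPC socket and is driven externally: attach the controller, then trigger the run. Condensed replay of the three steps just traced (arguments verbatim from the trace; rpc_cmd again wraps scripts/rpc.py):

build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # 10 s verify at qd 1024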
00:15:38.608 00:15:38.608 Latency(us) 00:15:38.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:38.608 Verification LBA range: start 0x0 length 0x4000 00:15:38.608 NVMe0n1 : 10.05 17077.07 66.71 0.00 0.00 59775.47 11379.43 48139.17 00:15:38.608 =================================================================================================================== 00:15:38.608 Total : 17077.07 66.71 0.00 0.00 59775.47 11379.43 48139.17 00:15:38.608 0 00:15:38.608 06:33:31 -- target/queue_depth.sh@39 -- # killprocess 84869 00:15:38.608 06:33:31 -- common/autotest_common.sh@926 -- # '[' -z 84869 ']' 00:15:38.608 06:33:31 -- common/autotest_common.sh@930 -- # kill -0 84869 00:15:38.608 06:33:31 -- common/autotest_common.sh@931 -- # uname 00:15:38.608 06:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:38.608 06:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84869 00:15:38.608 06:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:38.608 killing process with pid 84869 00:15:38.608 06:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:38.608 06:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84869' 00:15:38.608 Received shutdown signal, test time was about 10.000000 seconds 00:15:38.608 00:15:38.608 Latency(us) 00:15:38.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.608 =================================================================================================================== 00:15:38.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.608 06:33:31 -- common/autotest_common.sh@945 -- # kill 84869 00:15:38.608 06:33:31 -- common/autotest_common.sh@950 -- # wait 84869 00:15:38.867 06:33:31 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:38.867 06:33:31 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:38.867 06:33:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:38.867 06:33:31 -- nvmf/common.sh@116 -- # sync 00:15:38.867 06:33:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:38.867 06:33:31 -- nvmf/common.sh@119 -- # set +e 00:15:38.867 06:33:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:38.867 06:33:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:38.867 rmmod nvme_tcp 00:15:38.867 rmmod nvme_fabrics 00:15:38.867 rmmod nvme_keyring 00:15:38.867 06:33:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:38.867 06:33:31 -- nvmf/common.sh@123 -- # set -e 00:15:38.867 06:33:31 -- nvmf/common.sh@124 -- # return 0 00:15:38.867 06:33:31 -- nvmf/common.sh@477 -- # '[' -n 84813 ']' 00:15:38.867 06:33:31 -- nvmf/common.sh@478 -- # killprocess 84813 00:15:38.867 06:33:31 -- common/autotest_common.sh@926 -- # '[' -z 84813 ']' 00:15:38.867 06:33:31 -- common/autotest_common.sh@930 -- # kill -0 84813 00:15:38.867 06:33:31 -- common/autotest_common.sh@931 -- # uname 00:15:38.867 06:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:38.867 06:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84813 00:15:39.126 killing process with pid 84813 00:15:39.126 06:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:39.126 06:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:39.126 06:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84813' 00:15:39.126 06:33:31 -- 
common/autotest_common.sh@945 -- # kill 84813 00:15:39.126 06:33:31 -- common/autotest_common.sh@950 -- # wait 84813 00:15:39.386 06:33:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.386 06:33:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.386 06:33:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.386 06:33:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.386 06:33:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.386 06:33:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.386 06:33:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.386 06:33:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.386 06:33:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.386 ************************************ 00:15:39.386 END TEST nvmf_queue_depth 00:15:39.386 ************************************ 00:15:39.386 00:15:39.386 real 0m13.614s 00:15:39.386 user 0m22.673s 00:15:39.386 sys 0m2.546s 00:15:39.386 06:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.386 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 06:33:31 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:39.386 06:33:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:39.386 06:33:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.386 06:33:31 -- common/autotest_common.sh@10 -- # set +x 00:15:39.386 ************************************ 00:15:39.386 START TEST nvmf_multipath 00:15:39.386 ************************************ 00:15:39.386 06:33:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:39.386 * Looking for test storage... 
00:15:39.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.386 06:33:32 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.386 06:33:32 -- nvmf/common.sh@7 -- # uname -s 00:15:39.386 06:33:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.386 06:33:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.386 06:33:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.386 06:33:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.386 06:33:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.386 06:33:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.386 06:33:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.386 06:33:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.386 06:33:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.386 06:33:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:39.386 06:33:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:39.386 06:33:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.386 06:33:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.386 06:33:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.386 06:33:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.386 06:33:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.386 06:33:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.386 06:33:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.386 06:33:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.386 06:33:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.386 06:33:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.386 06:33:32 -- 
paths/export.sh@5 -- # export PATH 00:15:39.386 06:33:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.386 06:33:32 -- nvmf/common.sh@46 -- # : 0 00:15:39.386 06:33:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.386 06:33:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.386 06:33:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.386 06:33:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.386 06:33:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.386 06:33:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.386 06:33:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.386 06:33:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.386 06:33:32 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.386 06:33:32 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.386 06:33:32 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:39.386 06:33:32 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.386 06:33:32 -- target/multipath.sh@43 -- # nvmftestinit 00:15:39.386 06:33:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.386 06:33:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.386 06:33:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.386 06:33:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.386 06:33:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.386 06:33:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.386 06:33:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.386 06:33:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.386 06:33:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.386 06:33:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.386 06:33:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.386 06:33:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.386 06:33:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.386 06:33:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.386 06:33:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.386 06:33:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.386 06:33:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.386 06:33:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.386 06:33:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.386 06:33:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.386 06:33:32 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.386 06:33:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.386 06:33:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.386 06:33:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.645 Cannot find device "nvmf_tgt_br" 00:15:39.645 06:33:32 -- nvmf/common.sh@154 -- # true 00:15:39.645 06:33:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.646 Cannot find device "nvmf_tgt_br2" 00:15:39.646 06:33:32 -- nvmf/common.sh@155 -- # true 00:15:39.646 06:33:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.646 06:33:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.646 Cannot find device "nvmf_tgt_br" 00:15:39.646 06:33:32 -- nvmf/common.sh@157 -- # true 00:15:39.646 06:33:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.646 Cannot find device "nvmf_tgt_br2" 00:15:39.646 06:33:32 -- nvmf/common.sh@158 -- # true 00:15:39.646 06:33:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:39.646 06:33:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:39.646 06:33:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.646 06:33:32 -- nvmf/common.sh@161 -- # true 00:15:39.646 06:33:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.646 06:33:32 -- nvmf/common.sh@162 -- # true 00:15:39.646 06:33:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.646 06:33:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.646 06:33:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.646 06:33:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.646 06:33:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.646 06:33:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.646 06:33:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.646 06:33:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.646 06:33:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.646 06:33:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:39.646 06:33:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:39.646 06:33:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:39.646 06:33:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:39.646 06:33:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.646 06:33:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.646 06:33:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.646 06:33:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:39.646 06:33:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:39.646 06:33:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.646 06:33:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.646 06:33:32 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.916 06:33:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.916 06:33:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.916 06:33:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:39.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:15:39.916 00:15:39.916 --- 10.0.0.2 ping statistics --- 00:15:39.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.916 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:39.916 06:33:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:39.916 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.916 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:39.916 00:15:39.916 --- 10.0.0.3 ping statistics --- 00:15:39.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.916 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:39.916 06:33:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:39.916 00:15:39.916 --- 10.0.0.1 ping statistics --- 00:15:39.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.916 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:39.916 06:33:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.916 06:33:32 -- nvmf/common.sh@421 -- # return 0 00:15:39.916 06:33:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.916 06:33:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.916 06:33:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:39.916 06:33:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:39.916 06:33:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.916 06:33:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:39.916 06:33:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:39.916 06:33:32 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:39.916 06:33:32 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:39.916 06:33:32 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:39.916 06:33:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.916 06:33:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:39.916 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:15:39.916 06:33:32 -- nvmf/common.sh@469 -- # nvmfpid=85204 00:15:39.917 06:33:32 -- nvmf/common.sh@470 -- # waitforlisten 85204 00:15:39.917 06:33:32 -- common/autotest_common.sh@819 -- # '[' -z 85204 ']' 00:15:39.917 06:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.917 06:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:39.917 06:33:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.917 06:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
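The nvmf_veth_init block above builds the whole test topology from scratch before the target is launched inside the nvmf_tgt_ns_spdk namespace: a host-side initiator veth pair, two target-side pairs whose far ends are moved into the namespace, all host-side peers enslaved to one bridge, and 10.0.0.1/2/3 sharing a /24. Condensed to its essentials (only one of the two target paths shown; the real helper also opens iptables for port 4420 and ping-checks every address, as logged above):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                             # initiator-to-target reachability check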
00:15:39.917 06:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:39.917 06:33:32 -- common/autotest_common.sh@10 -- # set +x 00:15:39.917 [2024-10-04 06:33:32.431404] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:15:39.917 [2024-10-04 06:33:32.431514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.917 [2024-10-04 06:33:32.574650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.189 [2024-10-04 06:33:32.662634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.190 [2024-10-04 06:33:32.663121] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.190 [2024-10-04 06:33:32.663282] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.190 [2024-10-04 06:33:32.663519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.190 [2024-10-04 06:33:32.663796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.190 [2024-10-04 06:33:32.663899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.190 [2024-10-04 06:33:32.664116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.190 [2024-10-04 06:33:32.663988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.756 06:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:40.756 06:33:33 -- common/autotest_common.sh@852 -- # return 0 00:15:40.756 06:33:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.756 06:33:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:40.756 06:33:33 -- common/autotest_common.sh@10 -- # set +x 00:15:41.014 06:33:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.014 06:33:33 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.014 [2024-10-04 06:33:33.666301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.272 06:33:33 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:41.530 Malloc0 00:15:41.530 06:33:33 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:41.788 06:33:34 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.788 06:33:34 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.046 [2024-10-04 06:33:34.648017] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.046 06:33:34 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:42.304 [2024-10-04 06:33:34.920456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.304 06:33:34 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:42.561 06:33:35 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:42.820 06:33:35 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.820 06:33:35 -- common/autotest_common.sh@1177 -- # local i=0 00:15:42.820 06:33:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.820 06:33:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:42.820 06:33:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:44.722 06:33:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:44.723 06:33:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:44.723 06:33:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.723 06:33:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:44.723 06:33:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.723 06:33:37 -- common/autotest_common.sh@1187 -- # return 0 00:15:44.723 06:33:37 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:44.723 06:33:37 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:44.981 06:33:37 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:44.981 06:33:37 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:44.981 06:33:37 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:44.981 06:33:37 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:44.981 06:33:37 -- target/multipath.sh@38 -- # return 0 00:15:44.981 06:33:37 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:44.981 06:33:37 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:44.981 06:33:37 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:44.981 06:33:37 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:44.981 06:33:37 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:44.981 06:33:37 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:44.981 06:33:37 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:44.981 06:33:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:44.981 06:33:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.981 06:33:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:44.981 06:33:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:44.981 06:33:37 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:44.981 06:33:37 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:44.981 06:33:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:44.981 06:33:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.981 06:33:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:44.981 06:33:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:44.981 06:33:37 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:44.981 06:33:37 -- target/multipath.sh@85 -- # echo numa 00:15:44.981 06:33:37 -- target/multipath.sh@88 -- # fio_pid=85347 00:15:44.981 06:33:37 -- target/multipath.sh@90 -- # sleep 1 00:15:44.981 06:33:37 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:44.981 [global] 00:15:44.981 thread=1 00:15:44.981 invalidate=1 00:15:44.981 rw=randrw 00:15:44.981 time_based=1 00:15:44.981 runtime=6 00:15:44.981 ioengine=libaio 00:15:44.981 direct=1 00:15:44.981 bs=4096 00:15:44.981 iodepth=128 00:15:44.981 norandommap=0 00:15:44.981 numjobs=1 00:15:44.981 00:15:44.981 verify_dump=1 00:15:44.981 verify_backlog=512 00:15:44.981 verify_state_save=0 00:15:44.981 do_verify=1 00:15:44.981 verify=crc32c-intel 00:15:44.981 [job0] 00:15:44.981 filename=/dev/nvme0n1 00:15:44.981 Could not set queue depth (nvme0n1) 00:15:44.981 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:44.981 fio-3.35 00:15:44.981 Starting 1 thread 00:15:45.916 06:33:38 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:46.175 06:33:38 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:46.434 06:33:39 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:46.434 06:33:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:46.434 06:33:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.434 06:33:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:46.434 06:33:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:46.434 06:33:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:46.434 06:33:39 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:46.434 06:33:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:46.434 06:33:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.434 06:33:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:46.434 06:33:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.434 06:33:39 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.434 06:33:39 -- target/multipath.sh@25 -- # sleep 1s 00:15:47.810 06:33:40 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:47.810 06:33:40 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:47.810 06:33:40 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:47.810 06:33:40 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:47.810 06:33:40 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:48.068 06:33:40 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:48.068 06:33:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:48.068 06:33:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.068 06:33:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:48.068 06:33:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:48.068 06:33:40 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:48.068 06:33:40 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:48.068 06:33:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:48.068 06:33:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.068 06:33:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:48.068 06:33:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:48.068 06:33:40 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:48.068 06:33:40 -- target/multipath.sh@25 -- # sleep 1s 00:15:49.004 06:33:41 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:49.004 06:33:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:49.004 06:33:41 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:49.004 06:33:41 -- target/multipath.sh@104 -- # wait 85347 00:15:51.537 00:15:51.537 job0: (groupid=0, jobs=1): err= 0: pid=85368: Fri Oct 4 06:33:43 2024 00:15:51.537 read: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(291MiB/6003msec) 00:15:51.537 slat (usec): min=4, max=6754, avg=45.86, stdev=211.62 00:15:51.537 clat (usec): min=891, max=13758, avg=7082.46, stdev=1151.88 00:15:51.537 lat (usec): min=1009, max=13784, avg=7128.31, stdev=1158.81 00:15:51.538 clat percentiles (usec): 00:15:51.538 | 1.00th=[ 4178], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6194], 00:15:51.538 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7308], 00:15:51.538 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[ 8979], 00:15:51.538 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11600], 99.95th=[11863], 00:15:51.538 | 99.99th=[12780] 00:15:51.538 bw ( KiB/s): min=12512, max=33664, per=52.12%, avg=25857.45, stdev=7642.59, samples=11 00:15:51.538 iops : min= 3128, max= 8416, avg=6464.36, stdev=1910.65, samples=11 00:15:51.538 write: IOPS=7266, BW=28.4MiB/s (29.8MB/s)(149MiB/5247msec); 0 zone resets 00:15:51.538 slat (usec): min=11, max=2797, avg=58.28, stdev=144.00 00:15:51.538 clat (usec): min=721, max=12066, avg=6139.49, stdev=935.17 00:15:51.538 lat (usec): min=896, max=12098, avg=6197.76, stdev=938.58 00:15:51.538 clat percentiles (usec): 00:15:51.538 | 1.00th=[ 3425], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 5538], 00:15:51.538 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6325], 00:15:51.538 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7439], 00:15:51.538 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10421], 99.95th=[10683], 00:15:51.538 | 99.99th=[11076] 00:15:51.538 bw ( KiB/s): min=12640, max=33392, per=88.83%, avg=25819.64, stdev=7349.22, samples=11 00:15:51.538 iops : min= 3160, max= 8348, avg=6454.91, stdev=1837.31, samples=11 00:15:51.538 lat (usec) : 750=0.01%, 1000=0.01% 00:15:51.538 lat (msec) : 2=0.03%, 4=1.46%, 10=97.20%, 20=1.31% 00:15:51.538 cpu : usr=5.93%, sys=23.46%, ctx=6835, majf=0, minf=151 00:15:51.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:51.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.538 issued rwts: total=74447,38126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.538 00:15:51.538 Run status group 0 (all jobs): 00:15:51.538 READ: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6003-6003msec 00:15:51.538 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=149MiB (156MB), run=5247-5247msec 00:15:51.538 00:15:51.538 Disk stats (read/write): 00:15:51.538 nvme0n1: ios=72610/38126, merge=0/0, ticks=480969/218454, in_queue=699423, util=98.65% 00:15:51.538 06:33:43 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:51.538 06:33:44 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:51.796 06:33:44 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:51.796 06:33:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:51.796 06:33:44 -- target/multipath.sh@22 -- # local timeout=20 00:15:51.796 06:33:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:51.796 06:33:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:51.796 06:33:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:51.796 06:33:44 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:51.796 06:33:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:51.796 06:33:44 -- target/multipath.sh@22 -- # local timeout=20 00:15:51.796 06:33:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:51.796 06:33:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:51.796 06:33:44 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:51.796 06:33:44 -- target/multipath.sh@25 -- # sleep 1s 00:15:52.732 06:33:45 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:52.732 06:33:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:52.732 06:33:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:52.732 06:33:45 -- target/multipath.sh@113 -- # echo round-robin 00:15:52.732 06:33:45 -- target/multipath.sh@116 -- # fio_pid=85498 00:15:52.732 06:33:45 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:52.732 06:33:45 -- target/multipath.sh@118 -- # sleep 1 00:15:52.732 [global] 00:15:52.732 thread=1 00:15:52.732 invalidate=1 00:15:52.732 rw=randrw 00:15:52.732 time_based=1 00:15:52.732 runtime=6 00:15:52.732 ioengine=libaio 00:15:52.732 direct=1 00:15:52.732 bs=4096 00:15:52.732 iodepth=128 00:15:52.732 norandommap=0 00:15:52.732 numjobs=1 00:15:52.732 00:15:52.732 verify_dump=1 00:15:52.732 verify_backlog=512 00:15:52.732 verify_state_save=0 00:15:52.732 do_verify=1 00:15:52.732 verify=crc32c-intel 00:15:52.732 [job0] 00:15:52.732 filename=/dev/nvme0n1 00:15:52.732 Could not set queue depth (nvme0n1) 00:15:52.991 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:52.991 fio-3.35 00:15:52.991 Starting 1 thread 00:15:53.927 06:33:46 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:53.927 06:33:46 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:54.187 06:33:46 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:54.187 06:33:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:54.187 06:33:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:54.187 06:33:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:54.187 06:33:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:54.187 06:33:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:54.187 06:33:46 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:54.187 06:33:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:54.187 06:33:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:54.187 06:33:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:54.187 06:33:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:54.187 06:33:46 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:54.187 06:33:46 -- target/multipath.sh@25 -- # sleep 1s 00:15:55.566 06:33:47 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:55.566 06:33:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:55.566 06:33:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:55.566 06:33:47 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:55.566 06:33:48 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:55.838 06:33:48 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:55.838 06:33:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:55.838 06:33:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:55.838 06:33:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:55.838 06:33:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:55.839 06:33:48 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:55.839 06:33:48 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:55.839 06:33:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:55.839 06:33:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:55.839 06:33:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:55.839 06:33:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:55.839 06:33:48 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:55.839 06:33:48 -- target/multipath.sh@25 -- # sleep 1s 00:15:56.851 06:33:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:56.851 06:33:49 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:56.851 06:33:49 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:56.851 06:33:49 -- target/multipath.sh@132 -- # wait 85498 00:15:59.380 00:15:59.380 job0: (groupid=0, jobs=1): err= 0: pid=85525: Fri Oct 4 06:33:51 2024 00:15:59.380 read: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(298MiB/6003msec) 00:15:59.380 slat (usec): min=7, max=5439, avg=38.97, stdev=180.61 00:15:59.380 clat (usec): min=407, max=17262, avg=6975.58, stdev=1802.23 00:15:59.380 lat (usec): min=418, max=17272, avg=7014.55, stdev=1806.29 00:15:59.380 clat percentiles (usec): 00:15:59.380 | 1.00th=[ 2311], 5.00th=[ 3720], 10.00th=[ 5080], 20.00th=[ 6063], 00:15:59.380 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7177], 00:15:59.380 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8979], 95.00th=[10159], 00:15:59.380 | 99.00th=[12387], 99.50th=[13435], 99.90th=[15401], 99.95th=[15926], 00:15:59.380 | 99.99th=[17171] 00:15:59.380 bw ( KiB/s): min=13520, max=32768, per=52.80%, avg=26854.64, stdev=6451.29, samples=11 00:15:59.380 iops : min= 3380, max= 8192, avg=6713.64, stdev=1612.81, samples=11 00:15:59.380 write: IOPS=7364, BW=28.8MiB/s (30.2MB/s)(151MiB/5246msec); 0 zone resets 00:15:59.380 slat (usec): min=14, max=3219, avg=50.78, stdev=121.14 00:15:59.380 clat (usec): min=960, max=14993, avg=5937.78, stdev=1491.21 00:15:59.380 lat (usec): min=985, max=15019, avg=5988.55, stdev=1493.85 00:15:59.380 clat percentiles (usec): 00:15:59.380 | 1.00th=[ 2147], 5.00th=[ 2999], 10.00th=[ 3687], 20.00th=[ 5211], 00:15:59.380 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6063], 60.00th=[ 6325], 00:15:59.380 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7373], 95.00th=[ 8291], 00:15:59.380 | 99.00th=[10028], 99.50th=[10683], 99.90th=[12387], 99.95th=[13435], 00:15:59.380 | 99.99th=[14484] 00:15:59.380 bw ( KiB/s): min=14224, max=31896, per=90.99%, avg=26804.73, stdev=6071.75, samples=11 00:15:59.381 iops : min= 3556, max= 7974, avg=6701.18, stdev=1517.94, samples=11 00:15:59.381 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.07% 00:15:59.381 lat (msec) : 2=0.59%, 4=7.27%, 10=87.94%, 20=4.10% 00:15:59.381 cpu : usr=6.26%, sys=24.69%, ctx=7204, majf=0, minf=90 00:15:59.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:59.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:59.381 issued rwts: total=76331,38633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:59.381 00:15:59.381 Run status group 0 (all jobs): 00:15:59.381 READ: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=298MiB (313MB), run=6003-6003msec 00:15:59.381 WRITE: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=151MiB (158MB), run=5246-5246msec 00:15:59.381 00:15:59.381 Disk stats (read/write): 00:15:59.381 nvme0n1: ios=75507/37616, merge=0/0, ticks=492757/208069, in_queue=700826, util=98.63% 00:15:59.381 06:33:51 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:59.381 06:33:51 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:59.381 06:33:51 -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.381 06:33:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:59.381 06:33:51 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.381 06:33:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:59.381 06:33:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:59.381 06:33:51 -- common/autotest_common.sh@1210 -- # return 0 00:15:59.381 06:33:51 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.381 06:33:51 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:59.381 06:33:51 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:59.381 06:33:51 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:59.381 06:33:51 -- target/multipath.sh@144 -- # nvmftestfini 00:15:59.381 06:33:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:59.381 06:33:51 -- nvmf/common.sh@116 -- # sync 00:15:59.381 06:33:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:59.381 06:33:52 -- nvmf/common.sh@119 -- # set +e 00:15:59.381 06:33:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:59.381 06:33:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:59.381 rmmod nvme_tcp 00:15:59.639 rmmod nvme_fabrics 00:15:59.639 rmmod nvme_keyring 00:15:59.639 06:33:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:59.639 06:33:52 -- nvmf/common.sh@123 -- # set -e 00:15:59.639 06:33:52 -- nvmf/common.sh@124 -- # return 0 00:15:59.639 06:33:52 -- nvmf/common.sh@477 -- # '[' -n 85204 ']' 00:15:59.639 06:33:52 -- nvmf/common.sh@478 -- # killprocess 85204 00:15:59.639 06:33:52 -- common/autotest_common.sh@926 -- # '[' -z 85204 ']' 00:15:59.639 06:33:52 -- common/autotest_common.sh@930 -- # kill -0 85204 00:15:59.639 06:33:52 -- common/autotest_common.sh@931 -- # uname 00:15:59.639 06:33:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:59.639 06:33:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85204 00:15:59.639 killing process with pid 85204 00:15:59.639 06:33:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:59.639 06:33:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:59.639 06:33:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85204' 00:15:59.639 06:33:52 -- common/autotest_common.sh@945 -- # kill 85204 00:15:59.639 06:33:52 -- common/autotest_common.sh@950 -- # wait 85204 00:15:59.898 06:33:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:59.898 06:33:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:59.898 06:33:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:59.898 06:33:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.898 06:33:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:59.898 06:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.898 06:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.898 06:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.898 06:33:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:59.898 00:15:59.898 real 0m20.493s 00:15:59.898 user 1m20.448s 00:15:59.898 sys 0m6.448s 00:15:59.898 06:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.898 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.898 ************************************ 00:15:59.898 END TEST nvmf_multipath 00:15:59.898 ************************************ 00:15:59.898 06:33:52 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:59.898 06:33:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:59.898 06:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:59.898 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:15:59.898 ************************************ 00:15:59.898 START TEST nvmf_zcopy 00:15:59.898 ************************************ 00:15:59.898 06:33:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:59.898 * Looking for test storage... 00:15:59.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:59.898 06:33:52 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.898 06:33:52 -- nvmf/common.sh@7 -- # uname -s 00:15:59.898 06:33:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.898 06:33:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.898 06:33:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.898 06:33:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.898 06:33:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.898 06:33:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.898 06:33:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.898 06:33:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.898 06:33:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.898 06:33:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.898 06:33:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:59.898 06:33:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:15:59.898 06:33:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.898 06:33:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.898 06:33:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.898 06:33:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.898 06:33:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.898 06:33:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.898 06:33:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.898 06:33:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.898 06:33:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.899 
06:33:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.899 06:33:52 -- paths/export.sh@5 -- # export PATH 00:15:59.899 06:33:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.899 06:33:52 -- nvmf/common.sh@46 -- # : 0 00:15:59.899 06:33:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.899 06:33:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.899 06:33:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.899 06:33:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.899 06:33:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.899 06:33:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:59.899 06:33:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.899 06:33:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.899 06:33:52 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:59.899 06:33:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:59.899 06:33:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.899 06:33:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.899 06:33:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.899 06:33:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.899 06:33:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.899 06:33:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.899 06:33:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.899 06:33:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:59.899 06:33:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:59.899 06:33:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:59.899 06:33:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:59.899 06:33:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:59.899 06:33:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:59.899 06:33:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.899 06:33:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.899 06:33:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:59.899 06:33:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:59.899 06:33:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.899 06:33:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.899 06:33:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.899 06:33:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.899 06:33:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.899 06:33:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.899 06:33:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.899 06:33:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.899 06:33:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:00.157 06:33:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:00.157 Cannot find device "nvmf_tgt_br" 00:16:00.157 06:33:52 -- nvmf/common.sh@154 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.157 Cannot find device "nvmf_tgt_br2" 00:16:00.157 06:33:52 -- nvmf/common.sh@155 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:00.157 06:33:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:00.157 Cannot find device "nvmf_tgt_br" 00:16:00.157 06:33:52 -- nvmf/common.sh@157 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:00.157 Cannot find device "nvmf_tgt_br2" 00:16:00.157 06:33:52 -- nvmf/common.sh@158 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:00.157 06:33:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:00.157 06:33:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.157 06:33:52 -- nvmf/common.sh@161 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.157 06:33:52 -- nvmf/common.sh@162 -- # true 00:16:00.157 06:33:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.157 06:33:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.157 06:33:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.157 06:33:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.157 06:33:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.157 06:33:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.157 06:33:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.157 06:33:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.157 06:33:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.415 06:33:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:00.415 06:33:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:00.415 06:33:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:00.415 06:33:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:00.415 06:33:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.416 06:33:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.416 06:33:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.416 06:33:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:00.416 
06:33:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:00.416 06:33:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.416 06:33:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.416 06:33:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.416 06:33:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.416 06:33:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.416 06:33:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:00.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:00.416 00:16:00.416 --- 10.0.0.2 ping statistics --- 00:16:00.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.416 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:00.416 06:33:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:00.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:00.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:16:00.416 00:16:00.416 --- 10.0.0.3 ping statistics --- 00:16:00.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.416 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:00.416 06:33:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:00.416 00:16:00.416 --- 10.0.0.1 ping statistics --- 00:16:00.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.416 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:00.416 06:33:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.416 06:33:52 -- nvmf/common.sh@421 -- # return 0 00:16:00.416 06:33:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:00.416 06:33:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.416 06:33:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:00.416 06:33:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:00.416 06:33:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.416 06:33:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:00.416 06:33:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:00.416 06:33:52 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:00.416 06:33:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:00.416 06:33:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:00.416 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.416 06:33:52 -- nvmf/common.sh@469 -- # nvmfpid=85796 00:16:00.416 06:33:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.416 06:33:52 -- nvmf/common.sh@470 -- # waitforlisten 85796 00:16:00.416 06:33:52 -- common/autotest_common.sh@819 -- # '[' -z 85796 ']' 00:16:00.416 06:33:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.416 06:33:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:00.416 06:33:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
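As in the multipath run, nvmfappstart launches nvmf_tgt with ip netns exec inside the namespace and then blocks in waitforlisten until the RPC socket answers, so no rpc_cmd can race the target's startup. A hedged sketch of that wait loop (the real waitforlisten in autotest_common.sh is more involved, but the shape is: poll the socket, bail out if the PID dies, give up after max_retries=100 as traced above; rpc_get_methods is used here as a cheap liveness probe):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while ((i++ < 100)); do
                kill -0 "$pid" 2>/dev/null || return 1          # target died during startup
                "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
                sleep 0.5
        done
        return 1                                                # never started listening
}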
00:16:00.416 06:33:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:00.416 06:33:52 -- common/autotest_common.sh@10 -- # set +x 00:16:00.416 [2024-10-04 06:33:53.010249] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:16:00.416 [2024-10-04 06:33:53.010338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.675 [2024-10-04 06:33:53.146945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.675 [2024-10-04 06:33:53.214787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:00.675 [2024-10-04 06:33:53.215284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.675 [2024-10-04 06:33:53.215321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.675 [2024-10-04 06:33:53.215331] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.675 [2024-10-04 06:33:53.215366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.610 06:33:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:01.610 06:33:54 -- common/autotest_common.sh@852 -- # return 0 00:16:01.610 06:33:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:01.610 06:33:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 06:33:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.610 06:33:54 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:01.610 06:33:54 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 [2024-10-04 06:33:54.090765] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 [2024-10-04 06:33:54.106921] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 malloc0 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:01.610 06:33:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:01.610 06:33:54 -- common/autotest_common.sh@10 -- # set +x 00:16:01.610 06:33:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:01.610 06:33:54 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:01.610 06:33:54 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:01.610 06:33:54 -- nvmf/common.sh@520 -- # config=() 00:16:01.610 06:33:54 -- nvmf/common.sh@520 -- # local subsystem config 00:16:01.610 06:33:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:01.610 06:33:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:01.610 { 00:16:01.610 "params": { 00:16:01.610 "name": "Nvme$subsystem", 00:16:01.610 "trtype": "$TEST_TRANSPORT", 00:16:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:01.610 "adrfam": "ipv4", 00:16:01.610 "trsvcid": "$NVMF_PORT", 00:16:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:01.610 "hdgst": ${hdgst:-false}, 00:16:01.610 "ddgst": ${ddgst:-false} 00:16:01.610 }, 00:16:01.610 "method": "bdev_nvme_attach_controller" 00:16:01.610 } 00:16:01.610 EOF 00:16:01.610 )") 00:16:01.610 06:33:54 -- nvmf/common.sh@542 -- # cat 00:16:01.610 06:33:54 -- nvmf/common.sh@544 -- # jq . 00:16:01.610 06:33:54 -- nvmf/common.sh@545 -- # IFS=, 00:16:01.610 06:33:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:01.610 "params": { 00:16:01.610 "name": "Nvme1", 00:16:01.610 "trtype": "tcp", 00:16:01.610 "traddr": "10.0.0.2", 00:16:01.610 "adrfam": "ipv4", 00:16:01.610 "trsvcid": "4420", 00:16:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.610 "hdgst": false, 00:16:01.610 "ddgst": false 00:16:01.610 }, 00:16:01.610 "method": "bdev_nvme_attach_controller" 00:16:01.610 }' 00:16:01.610 [2024-10-04 06:33:54.197751] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:16:01.610 [2024-10-04 06:33:54.197860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85847 ] 00:16:01.869 [2024-10-04 06:33:54.339465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.869 [2024-10-04 06:33:54.431845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.127 Running I/O for 10 seconds... 
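The initiator side needs no nvme-cli here: bdevperf attaches to the target purely from the JSON printed above, which gen_nvmf_target_json assembles and the wrapper feeds in as --json /dev/fd/62. A stand-alone equivalent looks roughly like this (a sketch; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here, since the trace only shows the inner attach stanza):

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192  # 10 s verify workload, QD 128, 8 KiB I/O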
00:16:12.095 
00:16:12.095                                                                  Latency(us)
00:16:12.095 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:12.095 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:12.095   Verification LBA range: start 0x0 length 0x1000
00:16:12.095   Nvme1n1                   :      10.01   10212.06      79.78       0.00     0.00   12503.46    1087.30   17039.36
00:16:12.095 ===================================================================================================================
00:16:12.095 Total                       :              10212.06      79.78       0.00     0.00   12503.46    1087.30   17039.36
00:16:12.354 06:34:04 -- target/zcopy.sh@39 -- # perfpid=85971
00:16:12.354 06:34:04 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:12.354 06:34:04 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:12.354 06:34:04 -- target/zcopy.sh@41 -- # xtrace_disable
00:16:12.354 06:34:04 -- common/autotest_common.sh@10 -- # set +x
00:16:12.354 06:34:04 -- nvmf/common.sh@520 -- # config=()
00:16:12.354 06:34:04 -- nvmf/common.sh@520 -- # local subsystem config
00:16:12.354 06:34:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:12.354 06:34:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:12.354 {
00:16:12.354   "params": {
00:16:12.354     "name": "Nvme$subsystem",
00:16:12.354     "trtype": "$TEST_TRANSPORT",
00:16:12.354     "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:12.354     "adrfam": "ipv4",
00:16:12.354     "trsvcid": "$NVMF_PORT",
00:16:12.354     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:12.354     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:12.354     "hdgst": ${hdgst:-false},
00:16:12.354     "ddgst": ${ddgst:-false}
00:16:12.354   },
00:16:12.354   "method": "bdev_nvme_attach_controller"
00:16:12.354 }
00:16:12.354 EOF
00:16:12.354 )")
00:16:12.354 06:34:04 -- nvmf/common.sh@542 -- # cat
00:16:12.354 [2024-10-04 06:34:04.823714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:12.354 [2024-10-04 06:34:04.823760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:12.354 06:34:04 -- nvmf/common.sh@544 -- # jq .
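As a quick consistency check on the results table above, the MiB/s column is just IOPS multiplied by the 8192-byte I/O size:

# 10212.06 IOPS * 8192 B per I/O, converted to MiB/s (1 MiB = 1048576 B)
awk 'BEGIN { printf "%.2f MiB/s\n", 10212.06 * 8192 / 1048576 }'   # -> 79.78 MiB/s, matching the table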
00:16:12.354 06:34:04 -- nvmf/common.sh@545 -- # IFS=,
00:16:12.354 06:34:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:12.354   "params": {
00:16:12.354     "name": "Nvme1",
00:16:12.354     "trtype": "tcp",
00:16:12.354     "traddr": "10.0.0.2",
00:16:12.354     "adrfam": "ipv4",
00:16:12.354     "trsvcid": "4420",
00:16:12.354     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:12.354     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:12.354     "hdgst": false,
00:16:12.354     "ddgst": false
00:16:12.354   },
00:16:12.354   "method": "bdev_nvme_attach_controller"
00:16:12.354 }'
00:16:12.354 2024/10/04 06:34:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:12.355 [2024-10-04 06:34:04.855701] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:16:12.355 [2024-10-04 06:34:04.855767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85971 ]
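The error records that dominate the remainder of the run all show the same call: nvmf_subsystem_add_ns for NSID 1, which is already attached, rejected by the target (Requested NSID 1 already in use) and surfaced to the RPC client as JSON-RPC code -32602. The driving loop itself is not visible in this excerpt (xtrace is disabled at zcopy.sh@41), but the condition is straightforward to reproduce in isolation; a hypothetical two-call sketch, using the same method and parameters as in the log:

scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add: NSID 1 attaches
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
  || echo "expected: Code=-32602 Msg=Invalid parameters"                       # duplicate add is rejected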
00:16:12.355 [2024-10-04 06:34:04.989317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.616 [2024-10-04 06:34:05.078782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.617 Running I/O for 5 seconds...
00:16:12.617 [2024-10-04 06:34:05.255890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:12.617 [2024-10-04 06:34:05.255932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:12.617 2024/10/04 06:34:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
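For comparison, the two bdevperf passes in this excerpt differ only in their workload flags; both command lines appear verbatim in the trace above:

# Pass 1: 10-second verify workload, queue depth 128, 8 KiB I/O
build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
# Pass 2: 5-second 50/50 random read/write (-M 50) while the add_ns RPCs are replayed
build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192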
00:16:13.917 [2024-10-04 06:34:06.378631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:13.917 [2024-10-04 06:34:06.378683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:13.917 2024/10/04
06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.389303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.917 [2024-10-04 06:34:06.389352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.917 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.399764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.917 [2024-10-04 06:34:06.399813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.917 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.410610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.917 [2024-10-04 06:34:06.410660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.917 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.422726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.917 [2024-10-04 06:34:06.422775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.917 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.433985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.917 [2024-10-04 06:34:06.434037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.917 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.917 [2024-10-04 06:34:06.443031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.443068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.457151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.457198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.466611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.466645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.477007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.477054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.489104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.489153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.504498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.504546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.515875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.515934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.531662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.531710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.546948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.547002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.556495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.556543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.570101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.570150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.579765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.579841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.918 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:13.918 [2024-10-04 06:34:06.593324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.918 [2024-10-04 06:34:06.593373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.602715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.602763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.614516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.614564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.624460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.624507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.635398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.635477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.647483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.647529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.663626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.663674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.681226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.681278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.696266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.696317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.711613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.177 [2024-10-04 06:34:06.711660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.177 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.177 [2024-10-04 06:34:06.721428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.721461] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.735075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.735130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.744510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.744558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.759331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.759480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.776469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.776594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.786090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.786215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.800448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.800571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.809809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 
06:34:06.809959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.823590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.823716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.840566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.840615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.178 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.178 [2024-10-04 06:34:06.854877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.178 [2024-10-04 06:34:06.854908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.870073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.870106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.889877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.889928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.905012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.905190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.922436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:14.437 [2024-10-04 06:34:06.922615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.938812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.938973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.955626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.955807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.965941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.966101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.976579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.976772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.988920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.989078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:06.999218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:06.999272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:07.009445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:14.437 [2024-10-04 06:34:07.009473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.437 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.437 [2024-10-04 06:34:07.019964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.437 [2024-10-04 06:34:07.020007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.033035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.033064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.042662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.042707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.056114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.056153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.065947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.065992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.077148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.077178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.087285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.087362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.100875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.100930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.438 [2024-10-04 06:34:07.110302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.438 [2024-10-04 06:34:07.110346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.438 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.124306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.124351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.133623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.133668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.148296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.148341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.158499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.158543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.172304] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.172365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.181652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.181696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.191655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.697 [2024-10-04 06:34:07.191684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.697 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.697 [2024-10-04 06:34:07.201601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.201647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.211487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.211531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.221193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.221253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.231157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.231189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 
06:34:07.241453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.241498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.251068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.251099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.266231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.266276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.281493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.281525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.291040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.291072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.301517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.301548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.312053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.312083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:14.698 [2024-10-04 06:34:07.324718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.324750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.341807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.341861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.359421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.359465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.698 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.698 [2024-10-04 06:34:07.374963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.698 [2024-10-04 06:34:07.375017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.384194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.384254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.396995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.397026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.407364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.407409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.421569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.421615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.430905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.430949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.441306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.441339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.451736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.451780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.461913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.461956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.958 [2024-10-04 06:34:07.472050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.958 [2024-10-04 06:34:07.472080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.958 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.482059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.482087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.491933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.491991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.501444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.501488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.510760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.510790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.520712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.520757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.535073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.535105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.544509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.544554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:14.959 [2024-10-04 06:34:07.559330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.959 [2024-10-04 06:34:07.559361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.959 2024/10/04 06:34:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
[... identical error repeated for every retry from 06:34:07.501 through 06:34:09.327: spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 as already in use, nvmf_rpc_ns_paused logs "Unable to add namespace", and each nvmf_subsystem_add_ns JSON-RPC call for malloc0 on nqn.2016-06.io.spdk:cnode1 fails with Code=-32602 Msg=Invalid parameters ...]
00:16:16.807 [2024-10-04 06:34:09.336591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:16.807 [2024-10-04 06:34:09.336636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:16.807 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.807 [2024-10-04 06:34:09.349797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.807 [2024-10-04 06:34:09.349852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.365413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.365458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.382501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.382546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.398438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.398483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.416206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.416268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.431400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.431445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.440571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.440617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.452273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.452318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.463548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.463593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.472455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.472499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:16.808 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:16.808 [2024-10-04 06:34:09.483468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:16.808 [2024-10-04 06:34:09.483514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.067 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.067 [2024-10-04 06:34:09.500634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.067 [2024-10-04 06:34:09.500667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.067 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.067 [2024-10-04 06:34:09.517600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.067 [2024-10-04 06:34:09.517631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.067 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.067 [2024-10-04 06:34:09.533315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.067 [2024-10-04 06:34:09.533361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.067 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.067 [2024-10-04 06:34:09.549099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.067 [2024-10-04 06:34:09.549145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.566004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.566047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.582679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.582724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.599196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.599228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.616442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.616486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.632984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.633013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.650665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.650714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.665297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.665342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.682419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.682464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.696681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.696725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.712888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.712932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.728601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.728646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.068 [2024-10-04 06:34:09.739876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.068 [2024-10-04 06:34:09.739937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.068 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.757488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.757521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.772485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.772529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.789385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.789429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.805734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.805780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.822570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.822615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.839402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.839434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.855448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.855492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.873243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.873286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.883400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.883445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.896871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.896898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.905621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.905667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.919151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.919182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.927647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.927690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.941667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.941712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.328 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.328 [2024-10-04 06:34:09.951553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.328 [2024-10-04 06:34:09.951585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 
06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.329 [2024-10-04 06:34:09.961878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.329 [2024-10-04 06:34:09.961922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.329 [2024-10-04 06:34:09.973685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.329 [2024-10-04 06:34:09.973714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.329 [2024-10-04 06:34:09.982442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.329 [2024-10-04 06:34:09.982485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.329 [2024-10-04 06:34:09.994001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.329 [2024-10-04 06:34:09.994030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 06:34:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.329 [2024-10-04 06:34:10.003688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.329 [2024-10-04 06:34:10.003721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.329 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.018351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.018389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.034691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.034735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.052107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.052152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.067802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.067872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.079698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.079743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.096603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.096648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.107441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.107484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.121928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.121974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.139532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.139577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.154360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.588 [2024-10-04 06:34:10.154405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.588 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.588 [2024-10-04 06:34:10.170394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.170438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.589 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.589 [2024-10-04 06:34:10.186859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.186892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.589 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.589 [2024-10-04 06:34:10.203737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.203782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.589 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.589 [2024-10-04 06:34:10.220493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.220524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.589 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.589 [2024-10-04 06:34:10.236884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.236928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.589 2024/10/04 06:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.589 [2024-10-04 06:34:10.253478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:17.589 [2024-10-04 06:34:10.253523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
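For reference, the loop above is exercising SPDK's nvmf_subsystem_add_ns JSON-RPC method against a namespace ID that is already attached. A minimal sketch of one iteration, assuming the stock scripts/rpc.py client (only the method name, the parameters, and the -32602 result come from this log; the standalone invocation and the exact wire fields such as "id" are illustrative):

# One hand-driven iteration of the failing call (hypothetical invocation):
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Request sent on the RPC socket:
#   {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
#    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
# NSID 1 is already in use on cnode1, so the target replies:
#   {"jsonrpc": "2.0", "id": 1,
#    "error": {"code": -32602, "message": "Invalid parameters"}}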
00:16:17.589 Latency(us)
00:16:17.589 Device Information : runtime(s) IOPS    MiB/s  Fail/s TO/s Average  min     max
00:16:17.589 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:17.589 Nvme1n1 :                     5.01     12597.50 98.42  0.00 0.00 10149.39 4319.42 19541.64
00:16:17.589 ===================================================================================================================
00:16:17.589 Total :                       12597.50 98.42  0.00 0.00 10149.39 4319.42 19541.64
[... the add-namespace failures resume at the same cadence, 06:34:10.265 through 06:34:10.457, until the I/O loop is reaped below ...]
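The summary table above is internally consistent: at the job's 8192-byte I/O size, MiB/s is IOPS divided by 128, and Little's law at queue depth 128 reproduces the IOPS from the mean latency. A quick check using nothing beyond the table's own figures:

# 8192-byte I/Os: MiB/s = IOPS * 8192 / 2^20 = IOPS / 128
awk 'BEGIN { printf "%.2f MiB/s\n", 12597.50 / 128 }'         # 98.42, as reported
# Little's law: IOPS ~= queue_depth / mean_latency(s)
awk 'BEGIN { printf "%.0f IOPS\n", 128 / (10149.39 / 1e6) }'  # ~12612, within ~0.2% of 12597.50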
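With the send loop reaped, the trailer below detaches NSID 1, wraps the backing bdev in a delay bdev, and re-attaches it, so the abort run that follows has slow in-flight I/O to cancel. A hedged sketch of the same three steps as direct scripts/rpc.py calls (names and values are from the log records below; the reading of -r/-t as average/p99 read latency and -w/-n as the write equivalents, in microseconds, follows SPDK's bdev_delay_create help and should be verified against the tree in use):

scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # detach NSID 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                              # ~1 s added latency per I/O
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # re-attach as NSID 1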
00:16:17.849 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85971) - No such process 00:16:17.849 06:34:10 -- target/zcopy.sh@49 -- # wait 85971 00:16:17.849 06:34:10 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.849 06:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.849 06:34:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.849 06:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.849 06:34:10 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:17.849 06:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.849 06:34:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.849 delay0 00:16:17.849 06:34:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.849 06:34:10 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:17.849 06:34:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.849 06:34:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.849 06:34:10 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.849 06:34:10 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:18.108 [2024-10-04 06:34:10.641757] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:24.672 Initializing NVMe Controllers 00:16:24.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:24.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:24.672 Initialization complete. Launching workers. 00:16:24.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 171 00:16:24.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 458, failed to submit 33 00:16:24.672 success 285, unsuccess 173, failed 0 00:16:24.672 06:34:16 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:24.672 06:34:16 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:24.672 06:34:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:24.672 06:34:16 -- nvmf/common.sh@116 -- # sync 00:16:24.672 06:34:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:24.672 06:34:16 -- nvmf/common.sh@119 -- # set +e 00:16:24.672 06:34:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:24.672 06:34:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:24.672 rmmod nvme_tcp 00:16:24.672 rmmod nvme_fabrics 00:16:24.672 rmmod nvme_keyring 00:16:24.672 06:34:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:24.672 06:34:16 -- nvmf/common.sh@123 -- # set -e 00:16:24.672 06:34:16 -- nvmf/common.sh@124 -- # return 0 00:16:24.672 06:34:16 -- nvmf/common.sh@477 -- # '[' -n 85796 ']' 00:16:24.672 06:34:16 -- nvmf/common.sh@478 -- # killprocess 85796 00:16:24.672 06:34:16 -- common/autotest_common.sh@926 -- # '[' -z 85796 ']' 00:16:24.672 06:34:16 -- common/autotest_common.sh@930 -- # kill -0 85796 00:16:24.672 06:34:16 -- common/autotest_common.sh@931 -- # uname 00:16:24.672 06:34:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:24.672 06:34:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85796 00:16:24.672 killing process with pid 85796 00:16:24.672 06:34:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:24.672 06:34:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:24.672 06:34:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85796' 00:16:24.672 06:34:16 -- common/autotest_common.sh@945 -- # kill 85796 00:16:24.672 06:34:16 -- common/autotest_common.sh@950 -- # wait 85796 00:16:24.672 06:34:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:24.672 06:34:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:24.672 06:34:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:24.672 06:34:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.672 06:34:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:24.672 06:34:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.672 06:34:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.672 06:34:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.672 06:34:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:24.672 00:16:24.672 real 0m24.720s 00:16:24.672 user 0m39.829s 
00:16:24.672 sys 0m6.567s 00:16:24.672 ************************************ 00:16:24.672 END TEST nvmf_zcopy 00:16:24.672 ************************************ 00:16:24.672 06:34:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.672 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.672 06:34:17 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:24.672 06:34:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:24.672 06:34:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:24.672 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.672 ************************************ 00:16:24.672 START TEST nvmf_nmic 00:16:24.672 ************************************ 00:16:24.672 06:34:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:24.672 * Looking for test storage... 00:16:24.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:24.672 06:34:17 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.672 06:34:17 -- nvmf/common.sh@7 -- # uname -s 00:16:24.672 06:34:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.672 06:34:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.672 06:34:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.672 06:34:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.672 06:34:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.672 06:34:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.672 06:34:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.672 06:34:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.672 06:34:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.672 06:34:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.672 06:34:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:24.672 06:34:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:24.672 06:34:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.672 06:34:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.672 06:34:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.672 06:34:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.672 06:34:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.672 06:34:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.672 06:34:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.931 06:34:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.931 06:34:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.931 06:34:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.931 06:34:17 -- paths/export.sh@5 -- # export PATH 00:16:24.931 06:34:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.931 06:34:17 -- nvmf/common.sh@46 -- # : 0 00:16:24.931 06:34:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:24.931 06:34:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:24.931 06:34:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:24.931 06:34:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.931 06:34:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.931 06:34:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:24.931 06:34:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:24.931 06:34:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:24.931 06:34:17 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.931 06:34:17 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.931 06:34:17 -- target/nmic.sh@14 -- # nvmftestinit 00:16:24.931 06:34:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:24.931 06:34:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.931 06:34:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:24.931 06:34:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:24.931 06:34:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:24.931 06:34:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.931 06:34:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.931 06:34:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.932 06:34:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:24.932 06:34:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:24.932 06:34:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:24.932 06:34:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:24.932 06:34:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:24.932 06:34:17 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:16:24.932 06:34:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.932 06:34:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.932 06:34:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.932 06:34:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:24.932 06:34:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.932 06:34:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.932 06:34:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.932 06:34:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.932 06:34:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.932 06:34:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.932 06:34:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.932 06:34:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.932 06:34:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:24.932 06:34:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:24.932 Cannot find device "nvmf_tgt_br" 00:16:24.932 06:34:17 -- nvmf/common.sh@154 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.932 Cannot find device "nvmf_tgt_br2" 00:16:24.932 06:34:17 -- nvmf/common.sh@155 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:24.932 06:34:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:24.932 Cannot find device "nvmf_tgt_br" 00:16:24.932 06:34:17 -- nvmf/common.sh@157 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:24.932 Cannot find device "nvmf_tgt_br2" 00:16:24.932 06:34:17 -- nvmf/common.sh@158 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:24.932 06:34:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:24.932 06:34:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.932 06:34:17 -- nvmf/common.sh@161 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.932 06:34:17 -- nvmf/common.sh@162 -- # true 00:16:24.932 06:34:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.932 06:34:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.932 06:34:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.932 06:34:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.932 06:34:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.932 06:34:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.191 06:34:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.191 06:34:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.191 06:34:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.191 06:34:17 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:16:25.191 06:34:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:25.191 06:34:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:25.191 06:34:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:25.191 06:34:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.191 06:34:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.191 06:34:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.191 06:34:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:25.191 06:34:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:25.191 06:34:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.191 06:34:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.191 06:34:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.191 06:34:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.191 06:34:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.191 06:34:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:25.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:25.191 00:16:25.191 --- 10.0.0.2 ping statistics --- 00:16:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.191 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:25.191 06:34:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:25.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:25.191 00:16:25.191 --- 10.0.0.3 ping statistics --- 00:16:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.191 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:25.191 06:34:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:25.191 00:16:25.191 --- 10.0.0.1 ping statistics --- 00:16:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.191 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:25.191 06:34:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.191 06:34:17 -- nvmf/common.sh@421 -- # return 0 00:16:25.191 06:34:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:25.191 06:34:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.191 06:34:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:25.191 06:34:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:25.191 06:34:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.191 06:34:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:25.191 06:34:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.191 06:34:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:25.191 06:34:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.191 06:34:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:25.191 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:16:25.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
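For readers replaying the nvmf_veth_init sequence above by hand: a condensed sketch of the topology it builds, assembled only from commands that appear in this log (interface names and addresses copied verbatim; the harness's stale-interface cleanup, retries, and error handling are omitted). Run as root.

# Namespace plus three veth pairs: the *_br ends stay in the root namespace
# and get bridged; the two target-side *_if ends move into the namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as in the log: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring every end up, then tie the root-namespace peers to one bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on 4420 and let the bridge hairpin between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# The same reachability checks the harness performs.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1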
00:16:25.191 06:34:17 -- nvmf/common.sh@469 -- # nvmfpid=86283 00:16:25.191 06:34:17 -- nvmf/common.sh@470 -- # waitforlisten 86283 00:16:25.191 06:34:17 -- common/autotest_common.sh@819 -- # '[' -z 86283 ']' 00:16:25.191 06:34:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:25.191 06:34:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.191 06:34:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:25.191 06:34:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.191 06:34:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:25.191 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:16:25.191 [2024-10-04 06:34:17.845887] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:16:25.191 [2024-10-04 06:34:17.846205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.450 [2024-10-04 06:34:17.982338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.450 [2024-10-04 06:34:18.066081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.450 [2024-10-04 06:34:18.066276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.450 [2024-10-04 06:34:18.066288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.450 [2024-10-04 06:34:18.066296] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
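The two app_setup_trace notices just above name the trace options for this run: a live snapshot via spdk_trace, or copying the shared-memory file for later. A short aside, using only the commands the notices themselves give (the -i 0 shm id matches the target's -i 0; anything beyond these two commands, such as offline replay of the copied file, is not demonstrated in this log):

# Live snapshot, exactly as the notice suggests:
spdk_trace -s nvmf -i 0
# Offline: preserve the shm-backed trace file the notice names before the app exits.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0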
00:16:25.450 [2024-10-04 06:34:18.066876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.450 [2024-10-04 06:34:18.067039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.450 [2024-10-04 06:34:18.067196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.450 [2024-10-04 06:34:18.067124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.385 06:34:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:26.385 06:34:18 -- common/autotest_common.sh@852 -- # return 0 00:16:26.385 06:34:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:26.385 06:34:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:26.385 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 06:34:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.385 06:34:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.385 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.385 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 [2024-10-04 06:34:18.872038] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.385 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.385 06:34:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.385 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.385 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 Malloc0 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 [2024-10-04 06:34:18.954241] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.386 test case1: single bdev can't be used in multiple subsystems 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:26.386 06:34:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@28 -- # nmic_status=0 00:16:26.386 06:34:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 [2024-10-04 06:34:18.982086] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:26.386 [2024-10-04 06:34:18.982125] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:26.386 [2024-10-04 06:34:18.982138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.386 2024/10/04 06:34:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.386 request: 00:16:26.386 { 00:16:26.386 "method": "nvmf_subsystem_add_ns", 00:16:26.386 "params": { 00:16:26.386 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:26.386 "namespace": { 00:16:26.386 "bdev_name": "Malloc0" 00:16:26.386 } 00:16:26.386 } 00:16:26.386 } 00:16:26.386 Got JSON-RPC error response 00:16:26.386 GoRPCClient: error on JSON-RPC call 00:16:26.386 06:34:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:26.386 06:34:18 -- target/nmic.sh@29 -- # nmic_status=1 00:16:26.386 06:34:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:26.386 06:34:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:26.386 Adding namespace failed - expected result. 
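Test case1's expected failure can be reproduced outside the harness with the same RPCs the log captures (using rpc.py against the default /var/tmp/spdk.sock is an assumption about how rpc_cmd resolves here; the subsystem names, serials, and flags are verbatim from the log). The second nvmf_subsystem_add_ns must fail, because cnode1 already holds the exclusive_write claim on Malloc0:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim: OK

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo 'unexpected: second claim should have failed'
else
    echo 'Adding namespace failed - expected result.'           # matches the log
fi
# Expected error, per the capture above: JSON-RPC Code=-32602 Msg=Invalid parameters.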
00:16:26.386 06:34:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:26.386 test case2: host connect to nvmf target in multiple paths 00:16:26.386 06:34:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.386 06:34:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.386 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:16:26.386 [2024-10-04 06:34:18.998272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:26.386 06:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.386 06:34:19 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.645 06:34:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:26.902 06:34:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.902 06:34:19 -- common/autotest_common.sh@1177 -- # local i=0 00:16:26.902 06:34:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.902 06:34:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:26.902 06:34:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:28.804 06:34:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:28.804 06:34:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:28.804 06:34:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.804 06:34:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:16:28.804 06:34:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.804 06:34:21 -- common/autotest_common.sh@1187 -- # return 0 00:16:28.804 06:34:21 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:28.804 [global] 00:16:28.804 thread=1 00:16:28.804 invalidate=1 00:16:28.804 rw=write 00:16:28.804 time_based=1 00:16:28.804 runtime=1 00:16:28.804 ioengine=libaio 00:16:28.804 direct=1 00:16:28.804 bs=4096 00:16:28.804 iodepth=1 00:16:28.804 norandommap=0 00:16:28.804 numjobs=1 00:16:28.804 00:16:28.804 verify_dump=1 00:16:28.804 verify_backlog=512 00:16:28.804 verify_state_save=0 00:16:28.804 do_verify=1 00:16:28.804 verify=crc32c-intel 00:16:28.804 [job0] 00:16:28.804 filename=/dev/nvme0n1 00:16:28.804 Could not set queue depth (nvme0n1) 00:16:29.062 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.062 fio-3.35 00:16:29.062 Starting 1 thread 00:16:29.998 00:16:29.998 job0: (groupid=0, jobs=1): err= 0: pid=86395: Fri Oct 4 06:34:22 2024 00:16:29.998 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:29.998 slat (nsec): min=12318, max=88373, avg=14937.62, stdev=4775.46 00:16:29.998 clat (usec): min=119, max=530, avg=153.32, stdev=19.41 00:16:29.998 lat (usec): min=132, max=543, avg=168.26, stdev=20.23 00:16:29.998 clat percentiles (usec): 00:16:29.998 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:16:29.998 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:16:29.998 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 
95.00th=[ 188], 00:16:29.998 | 99.00th=[ 210], 99.50th=[ 225], 99.90th=[ 281], 99.95th=[ 347], 00:16:29.998 | 99.99th=[ 529] 00:16:29.998 write: IOPS=3563, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:16:29.998 slat (usec): min=18, max=139, avg=23.83, stdev= 7.01 00:16:29.998 clat (usec): min=85, max=238, avg=108.33, stdev=13.44 00:16:29.998 lat (usec): min=105, max=328, avg=132.16, stdev=16.16 00:16:29.998 clat percentiles (usec): 00:16:29.998 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:16:29.998 | 30.00th=[ 101], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:16:29.998 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 135], 00:16:29.998 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 190], 99.95th=[ 227], 00:16:29.998 | 99.99th=[ 239] 00:16:29.998 bw ( KiB/s): min=13680, max=13680, per=95.97%, avg=13680.00, stdev= 0.00, samples=1 00:16:29.998 iops : min= 3420, max= 3420, avg=3420.00, stdev= 0.00, samples=1 00:16:29.998 lat (usec) : 100=14.81%, 250=85.10%, 500=0.08%, 750=0.02% 00:16:29.998 cpu : usr=2.20%, sys=9.90%, ctx=6639, majf=0, minf=5 00:16:29.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.998 issued rwts: total=3072,3567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.998 00:16:29.998 Run status group 0 (all jobs): 00:16:29.998 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:16:29.998 WRITE: bw=13.9MiB/s (14.6MB/s), 13.9MiB/s-13.9MiB/s (14.6MB/s-14.6MB/s), io=13.9MiB (14.6MB), run=1001-1001msec 00:16:29.998 00:16:29.998 Disk stats (read/write): 00:16:29.998 nvme0n1: ios=2903/3072, merge=0/0, ticks=493/409, in_queue=902, util=91.38% 00:16:30.257 06:34:22 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:30.257 06:34:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.257 06:34:22 -- common/autotest_common.sh@1198 -- # local i=0 00:16:30.257 06:34:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:30.257 06:34:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.257 06:34:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:30.257 06:34:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.257 06:34:22 -- common/autotest_common.sh@1210 -- # return 0 00:16:30.257 06:34:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:30.257 06:34:22 -- target/nmic.sh@53 -- # nvmftestfini 00:16:30.257 06:34:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:30.257 06:34:22 -- nvmf/common.sh@116 -- # sync 00:16:30.257 06:34:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:30.257 06:34:22 -- nvmf/common.sh@119 -- # set +e 00:16:30.257 06:34:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:30.257 06:34:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:30.517 rmmod nvme_tcp 00:16:30.517 rmmod nvme_fabrics 00:16:30.517 rmmod nvme_keyring 00:16:30.517 06:34:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.517 06:34:22 -- nvmf/common.sh@123 -- # set -e 00:16:30.517 06:34:22 -- nvmf/common.sh@124 -- # return 0 00:16:30.517 06:34:22 -- nvmf/common.sh@477 -- 
# '[' -n 86283 ']' 00:16:30.517 06:34:22 -- nvmf/common.sh@478 -- # killprocess 86283 00:16:30.517 06:34:22 -- common/autotest_common.sh@926 -- # '[' -z 86283 ']' 00:16:30.517 06:34:22 -- common/autotest_common.sh@930 -- # kill -0 86283 00:16:30.517 06:34:22 -- common/autotest_common.sh@931 -- # uname 00:16:30.517 06:34:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.517 06:34:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86283 00:16:30.517 killing process with pid 86283 00:16:30.517 06:34:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.517 06:34:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.517 06:34:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86283' 00:16:30.517 06:34:23 -- common/autotest_common.sh@945 -- # kill 86283 00:16:30.517 06:34:23 -- common/autotest_common.sh@950 -- # wait 86283 00:16:30.775 06:34:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:30.776 06:34:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:30.776 06:34:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:30.776 06:34:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.776 06:34:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:30.776 06:34:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.776 06:34:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.776 06:34:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.776 06:34:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:30.776 ************************************ 00:16:30.776 END TEST nvmf_nmic 00:16:30.776 ************************************ 00:16:30.776 00:16:30.776 real 0m6.049s 00:16:30.776 user 0m20.460s 00:16:30.776 sys 0m1.259s 00:16:30.776 06:34:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.776 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:16:30.776 06:34:23 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:30.776 06:34:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.776 06:34:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.776 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:16:30.776 ************************************ 00:16:30.776 START TEST nvmf_fio_target 00:16:30.776 ************************************ 00:16:30.776 06:34:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:30.776 * Looking for test storage... 
00:16:30.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.776 06:34:23 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.776 06:34:23 -- nvmf/common.sh@7 -- # uname -s 00:16:30.776 06:34:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.776 06:34:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.776 06:34:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.776 06:34:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.776 06:34:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.776 06:34:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.776 06:34:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.776 06:34:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.776 06:34:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.776 06:34:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.776 06:34:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:30.776 06:34:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:30.776 06:34:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.776 06:34:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.776 06:34:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.776 06:34:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.776 06:34:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.776 06:34:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.776 06:34:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.776 06:34:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 06:34:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 06:34:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 06:34:23 -- paths/export.sh@5 
-- # export PATH 00:16:30.776 06:34:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.776 06:34:23 -- nvmf/common.sh@46 -- # : 0 00:16:30.776 06:34:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.776 06:34:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.776 06:34:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.776 06:34:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.776 06:34:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.776 06:34:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.776 06:34:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.776 06:34:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.776 06:34:23 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.776 06:34:23 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.776 06:34:23 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.776 06:34:23 -- target/fio.sh@16 -- # nvmftestinit 00:16:30.776 06:34:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.776 06:34:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.776 06:34:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.776 06:34:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.776 06:34:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.776 06:34:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.776 06:34:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.776 06:34:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.035 06:34:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:31.035 06:34:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:31.035 06:34:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:31.035 06:34:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:31.035 06:34:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:31.035 06:34:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:31.035 06:34:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.035 06:34:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.035 06:34:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:31.035 06:34:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:31.035 06:34:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:31.035 06:34:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:31.035 06:34:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:31.035 06:34:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.035 06:34:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:31.035 06:34:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:31.035 06:34:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:31.035 06:34:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:31.035 06:34:23 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:31.035 06:34:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:31.035 Cannot find device "nvmf_tgt_br" 00:16:31.035 06:34:23 -- nvmf/common.sh@154 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.035 Cannot find device "nvmf_tgt_br2" 00:16:31.035 06:34:23 -- nvmf/common.sh@155 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:31.035 06:34:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:31.035 Cannot find device "nvmf_tgt_br" 00:16:31.035 06:34:23 -- nvmf/common.sh@157 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:31.035 Cannot find device "nvmf_tgt_br2" 00:16:31.035 06:34:23 -- nvmf/common.sh@158 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:31.035 06:34:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:31.035 06:34:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.035 06:34:23 -- nvmf/common.sh@161 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.035 06:34:23 -- nvmf/common.sh@162 -- # true 00:16:31.035 06:34:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.035 06:34:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.035 06:34:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.035 06:34:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.035 06:34:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.035 06:34:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.035 06:34:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:31.035 06:34:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:31.035 06:34:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:31.035 06:34:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:31.035 06:34:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:31.035 06:34:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:31.035 06:34:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:31.035 06:34:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.035 06:34:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.035 06:34:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.035 06:34:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:31.035 06:34:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:31.035 06:34:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.294 06:34:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.294 06:34:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.294 06:34:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.294 06:34:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.294 06:34:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:31.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:31.294 00:16:31.294 --- 10.0.0.2 ping statistics --- 00:16:31.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.294 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:31.294 06:34:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:31.294 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:31.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:31.294 00:16:31.294 --- 10.0.0.3 ping statistics --- 00:16:31.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.294 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:31.294 06:34:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:31.294 00:16:31.294 --- 10.0.0.1 ping statistics --- 00:16:31.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.294 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:31.294 06:34:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.294 06:34:23 -- nvmf/common.sh@421 -- # return 0 00:16:31.294 06:34:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.294 06:34:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.294 06:34:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.294 06:34:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.294 06:34:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.294 06:34:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.294 06:34:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.294 06:34:23 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:31.294 06:34:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.294 06:34:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:31.294 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 06:34:23 -- nvmf/common.sh@469 -- # nvmfpid=86576 00:16:31.294 06:34:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:31.294 06:34:23 -- nvmf/common.sh@470 -- # waitforlisten 86576 00:16:31.294 06:34:23 -- common/autotest_common.sh@819 -- # '[' -z 86576 ']' 00:16:31.294 06:34:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.294 06:34:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.294 06:34:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.294 06:34:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.294 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 [2024-10-04 06:34:23.857969] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
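The nvmfappstart call above reduces to launching nvmf_tgt inside the namespace and waiting on its RPC socket; a minimal sketch, with the launch command taken verbatim from the log (the rpc_get_methods polling loop is an assumed stand-in for waitforlisten, whose exact probe is not shown here):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll /var/tmp/spdk.sock until the app answers (assumed probe; ~10 s budget).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done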
00:16:31.294 [2024-10-04 06:34:23.858067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.591 [2024-10-04 06:34:23.997143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.591 [2024-10-04 06:34:24.062899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.591 [2024-10-04 06:34:24.063057] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.591 [2024-10-04 06:34:24.063070] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.591 [2024-10-04 06:34:24.063079] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.591 [2024-10-04 06:34:24.063162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.591 [2024-10-04 06:34:24.063326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.591 [2024-10-04 06:34:24.063960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.591 [2024-10-04 06:34:24.063966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.166 06:34:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.166 06:34:24 -- common/autotest_common.sh@852 -- # return 0 00:16:32.166 06:34:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.166 06:34:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.166 06:34:24 -- common/autotest_common.sh@10 -- # set +x 00:16:32.425 06:34:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.425 06:34:24 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.684 [2024-10-04 06:34:25.144286] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.684 06:34:25 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:32.943 06:34:25 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:32.943 06:34:25 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:33.202 06:34:25 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:33.202 06:34:25 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:33.461 06:34:25 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:33.461 06:34:25 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:33.720 06:34:26 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:33.720 06:34:26 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:33.978 06:34:26 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.237 06:34:26 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:34.237 06:34:26 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.498 06:34:27 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:34.498 06:34:27 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:34.757 06:34:27 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:16:34.757 06:34:27 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:35.016 06:34:27 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.275 06:34:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:35.275 06:34:27 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.534 06:34:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:35.534 06:34:28 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.793 06:34:28 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.052 [2024-10-04 06:34:28.583096] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.052 06:34:28 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:36.311 06:34:28 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:36.570 06:34:29 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.570 06:34:29 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:36.570 06:34:29 -- common/autotest_common.sh@1177 -- # local i=0 00:16:36.570 06:34:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.570 06:34:29 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:36.570 06:34:29 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:36.570 06:34:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:39.104 06:34:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:39.104 06:34:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:39.104 06:34:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.104 06:34:31 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:39.104 06:34:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.104 06:34:31 -- common/autotest_common.sh@1187 -- # return 0 00:16:39.104 06:34:31 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:39.104 [global] 00:16:39.104 thread=1 00:16:39.104 invalidate=1 00:16:39.104 rw=write 00:16:39.104 time_based=1 00:16:39.104 runtime=1 00:16:39.104 ioengine=libaio 00:16:39.104 direct=1 00:16:39.104 bs=4096 00:16:39.104 iodepth=1 00:16:39.104 norandommap=0 00:16:39.104 numjobs=1 00:16:39.104 00:16:39.104 verify_dump=1 00:16:39.104 verify_backlog=512 00:16:39.104 verify_state_save=0 00:16:39.104 do_verify=1 00:16:39.104 verify=crc32c-intel 00:16:39.104 [job0] 00:16:39.104 filename=/dev/nvme0n1 00:16:39.104 [job1] 00:16:39.104 filename=/dev/nvme0n2 00:16:39.104 [job2] 00:16:39.104 filename=/dev/nvme0n3 00:16:39.104 [job3] 00:16:39.104 filename=/dev/nvme0n4 00:16:39.104 Could not set queue depth (nvme0n1) 00:16:39.104 Could not set queue depth (nvme0n2) 
00:16:39.104 Could not set queue depth (nvme0n3) 00:16:39.104 Could not set queue depth (nvme0n4) 00:16:39.104 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.104 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.104 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.104 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:39.104 fio-3.35 00:16:39.104 Starting 4 threads 00:16:40.041 00:16:40.041 job0: (groupid=0, jobs=1): err= 0: pid=86875: Fri Oct 4 06:34:32 2024 00:16:40.041 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:40.042 slat (usec): min=10, max=433, avg=16.28, stdev=11.73 00:16:40.042 clat (usec): min=12, max=1138, avg=339.48, stdev=95.06 00:16:40.042 lat (usec): min=177, max=1182, avg=355.76, stdev=94.48 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 231], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 277], 00:16:40.042 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:16:40.042 | 70.00th=[ 326], 80.00th=[ 433], 90.00th=[ 490], 95.00th=[ 523], 00:16:40.042 | 99.00th=[ 603], 99.50th=[ 709], 99.90th=[ 930], 99.95th=[ 1139], 00:16:40.042 | 99.99th=[ 1139] 00:16:40.042 write: IOPS=1749, BW=6997KiB/s (7165kB/s)(7004KiB/1001msec); 0 zone resets 00:16:40.042 slat (usec): min=18, max=149, avg=28.09, stdev= 7.98 00:16:40.042 clat (usec): min=97, max=2457, avg=226.99, stdev=84.05 00:16:40.042 lat (usec): min=116, max=2485, avg=255.08, stdev=84.30 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 125], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 200], 00:16:40.042 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:16:40.042 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 277], 00:16:40.042 | 99.00th=[ 318], 99.50th=[ 400], 99.90th=[ 2442], 99.95th=[ 2442], 00:16:40.042 | 99.99th=[ 2442] 00:16:40.042 bw ( KiB/s): min= 8192, max= 8192, per=23.40%, avg=8192.00, stdev= 0.00, samples=1 00:16:40.042 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:40.042 lat (usec) : 20=0.03%, 100=0.03%, 250=44.57%, 500=51.63%, 750=3.47% 00:16:40.042 lat (usec) : 1000=0.18% 00:16:40.042 lat (msec) : 2=0.03%, 4=0.06% 00:16:40.042 cpu : usr=1.70%, sys=5.30%, ctx=3290, majf=0, minf=9 00:16:40.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 issued rwts: total=1536,1751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.042 job1: (groupid=0, jobs=1): err= 0: pid=86876: Fri Oct 4 06:34:32 2024 00:16:40.042 read: IOPS=1553, BW=6214KiB/s (6363kB/s)(6220KiB/1001msec) 00:16:40.042 slat (nsec): min=12833, max=64717, avg=18057.27, stdev=5689.63 00:16:40.042 clat (usec): min=159, max=3146, avg=284.71, stdev=82.82 00:16:40.042 lat (usec): min=181, max=3195, avg=302.76, stdev=84.12 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 206], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 255], 00:16:40.042 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:16:40.042 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 334], 00:16:40.042 | 99.00th=[ 388], 99.50th=[ 482], 99.90th=[ 
676], 99.95th=[ 3163], 00:16:40.042 | 99.99th=[ 3163] 00:16:40.042 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:40.042 slat (usec): min=19, max=114, avg=28.00, stdev= 8.38 00:16:40.042 clat (usec): min=97, max=8198, avg=226.59, stdev=183.51 00:16:40.042 lat (usec): min=121, max=8220, avg=254.59, stdev=183.70 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 120], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 200], 00:16:40.042 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:16:40.042 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 269], 00:16:40.042 | 99.00th=[ 310], 99.50th=[ 367], 99.90th=[ 848], 99.95th=[ 1876], 00:16:40.042 | 99.99th=[ 8225] 00:16:40.042 bw ( KiB/s): min= 8192, max= 8192, per=23.40%, avg=8192.00, stdev= 0.00, samples=1 00:16:40.042 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:40.042 lat (usec) : 100=0.03%, 250=54.82%, 500=44.82%, 750=0.22%, 1000=0.03% 00:16:40.042 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03% 00:16:40.042 cpu : usr=1.40%, sys=6.80%, ctx=3603, majf=0, minf=10 00:16:40.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 issued rwts: total=1555,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.042 job2: (groupid=0, jobs=1): err= 0: pid=86877: Fri Oct 4 06:34:32 2024 00:16:40.042 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:40.042 slat (nsec): min=13491, max=67562, avg=16377.84, stdev=4413.46 00:16:40.042 clat (usec): min=146, max=564, avg=192.18, stdev=25.13 00:16:40.042 lat (usec): min=160, max=578, avg=208.55, stdev=25.77 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:16:40.042 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:16:40.042 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:16:40.042 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 343], 99.95th=[ 433], 00:16:40.042 | 99.99th=[ 562] 00:16:40.042 write: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:16:40.042 slat (nsec): min=20000, max=99935, avg=25841.14, stdev=7013.56 00:16:40.042 clat (usec): min=104, max=593, avg=146.68, stdev=23.13 00:16:40.042 lat (usec): min=125, max=615, avg=172.52, stdev=25.28 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 113], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 130], 00:16:40.042 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 149], 00:16:40.042 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 186], 00:16:40.042 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 281], 99.95th=[ 420], 00:16:40.042 | 99.99th=[ 594] 00:16:40.042 bw ( KiB/s): min=12288, max=12288, per=35.10%, avg=12288.00, stdev= 0.00, samples=1 00:16:40.042 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:40.042 lat (usec) : 250=99.00%, 500=0.96%, 750=0.04% 00:16:40.042 cpu : usr=1.60%, sys=8.40%, ctx=5208, majf=0, minf=15 00:16:40.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 issued rwts: total=2560,2647,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:40.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.042 job3: (groupid=0, jobs=1): err= 0: pid=86878: Fri Oct 4 06:34:32 2024 00:16:40.042 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:40.042 slat (usec): min=10, max=355, avg=15.84, stdev= 8.94 00:16:40.042 clat (usec): min=152, max=3556, avg=250.20, stdev=130.60 00:16:40.042 lat (usec): min=164, max=3570, avg=266.04, stdev=130.26 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:16:40.042 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 217], 00:16:40.042 | 70.00th=[ 227], 80.00th=[ 243], 90.00th=[ 465], 95.00th=[ 498], 00:16:40.042 | 99.00th=[ 578], 99.50th=[ 627], 99.90th=[ 742], 99.95th=[ 865], 00:16:40.042 | 99.99th=[ 3556] 00:16:40.042 write: IOPS=2313, BW=9255KiB/s (9477kB/s)(9264KiB/1001msec); 0 zone resets 00:16:40.042 slat (nsec): min=14888, max=99597, avg=25206.50, stdev=8098.82 00:16:40.042 clat (usec): min=103, max=1796, avg=167.67, stdev=60.98 00:16:40.042 lat (usec): min=131, max=1817, avg=192.88, stdev=60.40 00:16:40.042 clat percentiles (usec): 00:16:40.042 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:16:40.042 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 161], 00:16:40.042 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 239], 95.00th=[ 269], 00:16:40.042 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 420], 99.95th=[ 1516], 00:16:40.042 | 99.99th=[ 1795] 00:16:40.042 bw ( KiB/s): min=11944, max=11944, per=34.11%, avg=11944.00, stdev= 0.00, samples=1 00:16:40.042 iops : min= 2986, max= 2986, avg=2986.00, stdev= 0.00, samples=1 00:16:40.042 lat (usec) : 250=87.03%, 500=10.75%, 750=2.13%, 1000=0.02% 00:16:40.042 lat (msec) : 2=0.05%, 4=0.02% 00:16:40.042 cpu : usr=1.80%, sys=6.90%, ctx=4373, majf=0, minf=3 00:16:40.042 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:40.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.042 issued rwts: total=2048,2316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.042 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:40.042 00:16:40.042 Run status group 0 (all jobs): 00:16:40.042 READ: bw=30.0MiB/s (31.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.1MiB (31.5MB), run=1001-1001msec 00:16:40.042 WRITE: bw=34.2MiB/s (35.9MB/s), 6997KiB/s-10.3MiB/s (7165kB/s-10.8MB/s), io=34.2MiB (35.9MB), run=1001-1001msec 00:16:40.042 00:16:40.042 Disk stats (read/write): 00:16:40.042 nvme0n1: ios=1428/1536, merge=0/0, ticks=484/351, in_queue=835, util=88.48% 00:16:40.042 nvme0n2: ios=1581/1538, merge=0/0, ticks=467/365, in_queue=832, util=88.04% 00:16:40.042 nvme0n3: ios=2048/2419, merge=0/0, ticks=408/378, in_queue=786, util=89.14% 00:16:40.042 nvme0n4: ios=1917/2048, merge=0/0, ticks=459/331, in_queue=790, util=89.70% 00:16:40.042 06:34:32 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:40.042 [global] 00:16:40.042 thread=1 00:16:40.042 invalidate=1 00:16:40.042 rw=randwrite 00:16:40.042 time_based=1 00:16:40.042 runtime=1 00:16:40.042 ioengine=libaio 00:16:40.042 direct=1 00:16:40.042 bs=4096 00:16:40.042 iodepth=1 00:16:40.042 norandommap=0 00:16:40.042 numjobs=1 00:16:40.042 00:16:40.042 verify_dump=1 00:16:40.042 verify_backlog=512 00:16:40.042 verify_state_save=0 00:16:40.042 
do_verify=1 00:16:40.042 verify=crc32c-intel 00:16:40.042 [job0] 00:16:40.042 filename=/dev/nvme0n1 00:16:40.042 [job1] 00:16:40.042 filename=/dev/nvme0n2 00:16:40.042 [job2] 00:16:40.042 filename=/dev/nvme0n3 00:16:40.042 [job3] 00:16:40.042 filename=/dev/nvme0n4 00:16:40.042 Could not set queue depth (nvme0n1) 00:16:40.042 Could not set queue depth (nvme0n2) 00:16:40.042 Could not set queue depth (nvme0n3) 00:16:40.042 Could not set queue depth (nvme0n4) 00:16:40.301 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.302 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.302 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.302 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:40.302 fio-3.35 00:16:40.302 Starting 4 threads 00:16:41.677 00:16:41.677 job0: (groupid=0, jobs=1): err= 0: pid=86931: Fri Oct 4 06:34:33 2024 00:16:41.677 read: IOPS=1361, BW=5447KiB/s (5577kB/s)(5452KiB/1001msec) 00:16:41.677 slat (nsec): min=10414, max=82559, avg=17065.58, stdev=6449.64 00:16:41.677 clat (usec): min=136, max=3242, avg=366.98, stdev=114.52 00:16:41.677 lat (usec): min=153, max=3262, avg=384.04, stdev=114.74 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 153], 5.00th=[ 192], 10.00th=[ 255], 20.00th=[ 314], 00:16:41.677 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 379], 00:16:41.677 | 70.00th=[ 400], 80.00th=[ 433], 90.00th=[ 478], 95.00th=[ 502], 00:16:41.677 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 3228], 00:16:41.677 | 99.99th=[ 3228] 00:16:41.677 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:41.677 slat (nsec): min=11977, max=84154, avg=23092.35, stdev=7558.44 00:16:41.677 clat (usec): min=109, max=7479, avg=283.43, stdev=197.65 00:16:41.677 lat (usec): min=133, max=7499, avg=306.52, stdev=197.77 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 165], 5.00th=[ 194], 10.00th=[ 212], 20.00th=[ 231], 00:16:41.677 | 30.00th=[ 245], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 285], 00:16:41.677 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 375], 00:16:41.677 | 99.00th=[ 429], 99.50th=[ 469], 99.90th=[ 1663], 99.95th=[ 7504], 00:16:41.677 | 99.99th=[ 7504] 00:16:41.677 bw ( KiB/s): min= 7944, max= 7944, per=28.19%, avg=7944.00, stdev= 0.00, samples=1 00:16:41.677 iops : min= 1986, max= 1986, avg=1986.00, stdev= 0.00, samples=1 00:16:41.677 lat (usec) : 250=22.56%, 500=74.75%, 750=2.55% 00:16:41.677 lat (msec) : 2=0.07%, 4=0.03%, 10=0.03% 00:16:41.677 cpu : usr=1.20%, sys=4.70%, ctx=2900, majf=0, minf=9 00:16:41.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 issued rwts: total=1363,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.677 job1: (groupid=0, jobs=1): err= 0: pid=86932: Fri Oct 4 06:34:33 2024 00:16:41.677 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:41.677 slat (usec): min=9, max=147, avg=13.62, stdev= 5.98 00:16:41.677 clat (usec): min=67, max=2842, avg=263.06, stdev=115.05 00:16:41.677 lat (usec): min=144, max=2856, 
avg=276.69, stdev=114.48 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 176], 00:16:41.677 | 30.00th=[ 190], 40.00th=[ 206], 50.00th=[ 235], 60.00th=[ 297], 00:16:41.677 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 375], 95.00th=[ 400], 00:16:41.677 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 2245], 00:16:41.677 | 99.99th=[ 2835] 00:16:41.677 write: IOPS=2090, BW=8364KiB/s (8564kB/s)(8372KiB/1001msec); 0 zone resets 00:16:41.677 slat (usec): min=11, max=103, avg=21.89, stdev= 7.44 00:16:41.677 clat (usec): min=93, max=432, avg=181.70, stdev=62.98 00:16:41.677 lat (usec): min=112, max=451, avg=203.59, stdev=63.10 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 131], 00:16:41.677 | 30.00th=[ 141], 40.00th=[ 151], 50.00th=[ 163], 60.00th=[ 174], 00:16:41.677 | 70.00th=[ 198], 80.00th=[ 241], 90.00th=[ 285], 95.00th=[ 310], 00:16:41.677 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 416], 00:16:41.677 | 99.99th=[ 433] 00:16:41.677 bw ( KiB/s): min=12288, max=12288, per=43.61%, avg=12288.00, stdev= 0.00, samples=1 00:16:41.677 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:41.677 lat (usec) : 100=0.14%, 250=67.38%, 500=32.12%, 750=0.31% 00:16:41.677 lat (msec) : 4=0.05% 00:16:41.677 cpu : usr=1.30%, sys=5.90%, ctx=4146, majf=0, minf=11 00:16:41.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 issued rwts: total=2048,2093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.677 job2: (groupid=0, jobs=1): err= 0: pid=86933: Fri Oct 4 06:34:33 2024 00:16:41.677 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:41.677 slat (usec): min=11, max=164, avg=19.24, stdev= 8.37 00:16:41.677 clat (usec): min=199, max=552, avg=305.23, stdev=82.77 00:16:41.677 lat (usec): min=213, max=595, avg=324.47, stdev=82.24 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 241], 00:16:41.677 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 285], 00:16:41.677 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 449], 95.00th=[ 486], 00:16:41.677 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 545], 99.95th=[ 553], 00:16:41.677 | 99.99th=[ 553] 00:16:41.677 write: IOPS=1885, BW=7540KiB/s (7721kB/s)(7548KiB/1001msec); 0 zone resets 00:16:41.677 slat (usec): min=11, max=103, avg=27.03, stdev=10.22 00:16:41.677 clat (usec): min=142, max=1814, avg=235.00, stdev=68.92 00:16:41.677 lat (usec): min=164, max=1839, avg=262.02, stdev=68.34 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 188], 00:16:41.677 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 229], 00:16:41.677 | 70.00th=[ 249], 80.00th=[ 277], 90.00th=[ 334], 95.00th=[ 355], 00:16:41.677 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 437], 99.95th=[ 1811], 00:16:41.677 | 99.99th=[ 1811] 00:16:41.677 bw ( KiB/s): min= 8192, max= 8192, per=29.07%, avg=8192.00, stdev= 0.00, samples=1 00:16:41.677 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:41.677 lat (usec) : 250=52.67%, 500=45.87%, 750=1.43% 00:16:41.677 lat (msec) : 2=0.03% 00:16:41.677 cpu : usr=1.80%, sys=5.70%, 
ctx=3423, majf=0, minf=7 00:16:41.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 issued rwts: total=1536,1887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.677 job3: (groupid=0, jobs=1): err= 0: pid=86934: Fri Oct 4 06:34:33 2024 00:16:41.677 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:16:41.677 slat (nsec): min=8384, max=64226, avg=15252.19, stdev=5334.98 00:16:41.677 clat (usec): min=168, max=2306, avg=351.86, stdev=71.81 00:16:41.677 lat (usec): min=189, max=2318, avg=367.11, stdev=72.12 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 314], 00:16:41.677 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:16:41.677 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 433], 00:16:41.677 | 99.00th=[ 545], 99.50th=[ 594], 99.90th=[ 635], 99.95th=[ 2311], 00:16:41.677 | 99.99th=[ 2311] 00:16:41.677 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:41.677 slat (nsec): min=11162, max=79684, avg=22962.03, stdev=7331.10 00:16:41.677 clat (usec): min=107, max=7378, avg=277.90, stdev=192.10 00:16:41.677 lat (usec): min=128, max=7398, avg=300.86, stdev=192.22 00:16:41.677 clat percentiles (usec): 00:16:41.677 | 1.00th=[ 153], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 237], 00:16:41.677 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 281], 00:16:41.677 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 355], 00:16:41.677 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 1549], 99.95th=[ 7373], 00:16:41.677 | 99.99th=[ 7373] 00:16:41.677 bw ( KiB/s): min= 8008, max= 8008, per=28.42%, avg=8008.00, stdev= 0.00, samples=1 00:16:41.677 iops : min= 2002, max= 2002, avg=2002.00, stdev= 0.00, samples=1 00:16:41.677 lat (usec) : 250=16.51%, 500=82.82%, 750=0.54% 00:16:41.677 lat (msec) : 2=0.07%, 4=0.03%, 10=0.03% 00:16:41.677 cpu : usr=1.60%, sys=4.10%, ctx=2987, majf=0, minf=17 00:16:41.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.677 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.677 00:16:41.677 Run status group 0 (all jobs): 00:16:41.677 READ: bw=25.0MiB/s (26.2MB/s), 5447KiB/s-8184KiB/s (5577kB/s-8380kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:16:41.677 WRITE: bw=27.5MiB/s (28.9MB/s), 6138KiB/s-8364KiB/s (6285kB/s-8564kB/s), io=27.5MiB (28.9MB), run=1001-1001msec 00:16:41.677 00:16:41.677 Disk stats (read/write): 00:16:41.677 nvme0n1: ios=1097/1536, merge=0/0, ticks=396/428, in_queue=824, util=88.28% 00:16:41.677 nvme0n2: ios=1756/2048, merge=0/0, ticks=459/388, in_queue=847, util=89.08% 00:16:41.677 nvme0n3: ios=1465/1536, merge=0/0, ticks=448/363, in_queue=811, util=89.29% 00:16:41.677 nvme0n4: ios=1079/1536, merge=0/0, ticks=389/435, in_queue=824, util=89.53% 00:16:41.677 06:34:34 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:41.677 [global] 00:16:41.677 thread=1 00:16:41.677 invalidate=1 
00:16:41.677 rw=write 00:16:41.677 time_based=1 00:16:41.677 runtime=1 00:16:41.677 ioengine=libaio 00:16:41.677 direct=1 00:16:41.677 bs=4096 00:16:41.677 iodepth=128 00:16:41.677 norandommap=0 00:16:41.677 numjobs=1 00:16:41.677 00:16:41.677 verify_dump=1 00:16:41.677 verify_backlog=512 00:16:41.677 verify_state_save=0 00:16:41.677 do_verify=1 00:16:41.677 verify=crc32c-intel 00:16:41.677 [job0] 00:16:41.677 filename=/dev/nvme0n1 00:16:41.677 [job1] 00:16:41.677 filename=/dev/nvme0n2 00:16:41.677 [job2] 00:16:41.677 filename=/dev/nvme0n3 00:16:41.677 [job3] 00:16:41.677 filename=/dev/nvme0n4 00:16:41.677 Could not set queue depth (nvme0n1) 00:16:41.677 Could not set queue depth (nvme0n2) 00:16:41.677 Could not set queue depth (nvme0n3) 00:16:41.677 Could not set queue depth (nvme0n4) 00:16:41.677 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.677 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.677 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.677 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:41.677 fio-3.35 00:16:41.677 Starting 4 threads 00:16:43.055 00:16:43.055 job0: (groupid=0, jobs=1): err= 0: pid=86989: Fri Oct 4 06:34:35 2024 00:16:43.055 read: IOPS=2003, BW=8016KiB/s (8208kB/s)(8072KiB/1007msec) 00:16:43.055 slat (usec): min=6, max=7746, avg=273.23, stdev=1072.26 00:16:43.055 clat (usec): min=4618, max=53799, avg=34175.65, stdev=9190.49 00:16:43.055 lat (usec): min=8310, max=53811, avg=34448.88, stdev=9183.96 00:16:43.055 clat percentiles (usec): 00:16:43.055 | 1.00th=[ 8717], 5.00th=[22152], 10.00th=[25297], 20.00th=[28181], 00:16:43.055 | 30.00th=[28705], 40.00th=[31327], 50.00th=[32113], 60.00th=[33424], 00:16:43.055 | 70.00th=[36963], 80.00th=[43254], 90.00th=[49021], 95.00th=[52167], 00:16:43.055 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:16:43.055 | 99.99th=[53740] 00:16:43.055 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:16:43.055 slat (usec): min=18, max=8443, avg=210.79, stdev=1028.01 00:16:43.055 clat (usec): min=14395, max=37645, avg=27894.08, stdev=4916.07 00:16:43.055 lat (usec): min=17902, max=37672, avg=28104.87, stdev=4842.48 00:16:43.055 clat percentiles (usec): 00:16:43.056 | 1.00th=[17957], 5.00th=[21103], 10.00th=[21627], 20.00th=[22152], 00:16:43.056 | 30.00th=[24249], 40.00th=[26346], 50.00th=[28443], 60.00th=[30016], 00:16:43.056 | 70.00th=[31065], 80.00th=[32900], 90.00th=[34341], 95.00th=[35390], 00:16:43.056 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:16:43.056 | 99.99th=[37487] 00:16:43.056 bw ( KiB/s): min= 8192, max= 8208, per=21.96%, avg=8200.00, stdev=11.31, samples=2 00:16:43.056 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:16:43.056 lat (msec) : 10=0.71%, 20=1.92%, 50=92.94%, 100=4.43% 00:16:43.056 cpu : usr=2.18%, sys=6.55%, ctx=194, majf=0, minf=3 00:16:43.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:43.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:43.056 issued rwts: total=2018,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:43.056 job1: 
(groupid=0, jobs=1): err= 0: pid=86990: Fri Oct 4 06:34:35 2024 00:16:43.056 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:16:43.056 slat (usec): min=6, max=17199, avg=418.61, stdev=1898.66 00:16:43.056 clat (usec): min=3043, max=73458, avg=49251.82, stdev=16391.88 00:16:43.056 lat (usec): min=11247, max=73482, avg=49670.43, stdev=16389.67 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[11600], 5.00th=[22938], 10.00th=[25035], 20.00th=[27919], 00:16:43.056 | 30.00th=[41157], 40.00th=[47973], 50.00th=[55313], 60.00th=[59507], 00:16:43.056 | 70.00th=[61604], 80.00th=[63177], 90.00th=[68682], 95.00th=[69731], 00:16:43.056 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:16:43.056 | 99.99th=[73925] 00:16:43.056 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:16:43.056 slat (usec): min=19, max=10696, avg=222.73, stdev=1026.57 00:16:43.056 clat (usec): min=14948, max=63388, avg=32371.41, stdev=11245.85 00:16:43.056 lat (usec): min=18180, max=63413, avg=32594.14, stdev=11254.70 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[18220], 5.00th=[19268], 10.00th=[19268], 20.00th=[19792], 00:16:43.056 | 30.00th=[25822], 40.00th=[27395], 50.00th=[28967], 60.00th=[36963], 00:16:43.056 | 70.00th=[40109], 80.00th=[43254], 90.00th=[45876], 95.00th=[53740], 00:16:43.056 | 99.00th=[59507], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:16:43.056 | 99.99th=[63177] 00:16:43.056 bw ( KiB/s): min= 4096, max= 8208, per=16.48%, avg=6152.00, stdev=2907.62, samples=2 00:16:43.056 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:16:43.056 lat (msec) : 4=0.03%, 20=12.11%, 50=56.90%, 100=30.96% 00:16:43.056 cpu : usr=1.89%, sys=5.18%, ctx=182, majf=0, minf=10 00:16:43.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:16:43.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:43.056 issued rwts: total=1536,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:43.056 job2: (groupid=0, jobs=1): err= 0: pid=86991: Fri Oct 4 06:34:35 2024 00:16:43.056 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:16:43.056 slat (usec): min=6, max=8602, avg=146.30, stdev=730.67 00:16:43.056 clat (usec): min=12350, max=32246, avg=18881.40, stdev=2841.50 00:16:43.056 lat (usec): min=12377, max=32267, avg=19027.70, stdev=2886.73 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[14353], 5.00th=[15533], 10.00th=[15926], 20.00th=[16450], 00:16:43.056 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18744], 60.00th=[19268], 00:16:43.056 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21627], 95.00th=[24773], 00:16:43.056 | 99.00th=[28705], 99.50th=[31589], 99.90th=[32113], 99.95th=[32375], 00:16:43.056 | 99.99th=[32375] 00:16:43.056 write: IOPS=3133, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1008msec); 0 zone resets 00:16:43.056 slat (usec): min=14, max=9915, avg=165.34, stdev=808.49 00:16:43.056 clat (usec): min=6631, max=42638, avg=21913.38, stdev=6841.19 00:16:43.056 lat (usec): min=7677, max=42681, avg=22078.72, stdev=6908.91 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[10028], 5.00th=[15270], 10.00th=[16188], 20.00th=[16581], 00:16:43.056 | 30.00th=[17171], 40.00th=[17695], 50.00th=[19268], 60.00th=[21627], 00:16:43.056 | 70.00th=[25035], 80.00th=[27657], 90.00th=[31327], 95.00th=[38011], 
00:16:43.056 | 99.00th=[39584], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:16:43.056 | 99.99th=[42730] 00:16:43.056 bw ( KiB/s): min=12288, max=12312, per=32.94%, avg=12300.00, stdev=16.97, samples=2 00:16:43.056 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:43.056 lat (msec) : 10=0.55%, 20=63.44%, 50=36.01% 00:16:43.056 cpu : usr=4.27%, sys=9.33%, ctx=322, majf=0, minf=1 00:16:43.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:43.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:43.056 issued rwts: total=3072,3159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:43.056 job3: (groupid=0, jobs=1): err= 0: pid=86992: Fri Oct 4 06:34:35 2024 00:16:43.056 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:16:43.056 slat (usec): min=6, max=11178, avg=198.78, stdev=947.80 00:16:43.056 clat (usec): min=15526, max=40913, avg=25347.93, stdev=3921.86 00:16:43.056 lat (usec): min=15555, max=40934, avg=25546.71, stdev=3995.27 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[18744], 5.00th=[20317], 10.00th=[21103], 20.00th=[22414], 00:16:43.056 | 30.00th=[22938], 40.00th=[23725], 50.00th=[24511], 60.00th=[25035], 00:16:43.056 | 70.00th=[26870], 80.00th=[28443], 90.00th=[31851], 95.00th=[32375], 00:16:43.056 | 99.00th=[36439], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:16:43.056 | 99.99th=[41157] 00:16:43.056 write: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1008msec); 0 zone resets 00:16:43.056 slat (usec): min=11, max=7941, avg=175.98, stdev=834.17 00:16:43.056 clat (usec): min=5211, max=40890, avg=23338.25, stdev=4287.92 00:16:43.056 lat (usec): min=8230, max=40915, avg=23514.23, stdev=4366.52 00:16:43.056 clat percentiles (usec): 00:16:43.056 | 1.00th=[12125], 5.00th=[17957], 10.00th=[18744], 20.00th=[19792], 00:16:43.056 | 30.00th=[20841], 40.00th=[21627], 50.00th=[23200], 60.00th=[24249], 00:16:43.056 | 70.00th=[25297], 80.00th=[26608], 90.00th=[28443], 95.00th=[30278], 00:16:43.056 | 99.00th=[36439], 99.50th=[37487], 99.90th=[40109], 99.95th=[40109], 00:16:43.056 | 99.99th=[40633] 00:16:43.056 bw ( KiB/s): min= 9376, max=11104, per=27.43%, avg=10240.00, stdev=1221.88, samples=2 00:16:43.056 iops : min= 2344, max= 2776, avg=2560.00, stdev=305.47, samples=2 00:16:43.056 lat (msec) : 10=0.33%, 20=12.32%, 50=87.35% 00:16:43.056 cpu : usr=3.08%, sys=8.64%, ctx=239, majf=0, minf=1 00:16:43.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:43.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:43.056 issued rwts: total=2560,2666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:43.056 00:16:43.056 Run status group 0 (all jobs): 00:16:43.057 READ: bw=35.6MiB/s (37.3MB/s), 6120KiB/s-11.9MiB/s (6266kB/s-12.5MB/s), io=35.9MiB (37.6MB), run=1004-1008msec 00:16:43.057 WRITE: bw=36.5MiB/s (38.2MB/s), 6120KiB/s-12.2MiB/s (6266kB/s-12.8MB/s), io=36.8MiB (38.5MB), run=1004-1008msec 00:16:43.057 00:16:43.057 Disk stats (read/write): 00:16:43.057 nvme0n1: ios=1586/1945, merge=0/0, ticks=13890/11824, in_queue=25714, util=88.08% 00:16:43.057 nvme0n2: ios=1179/1536, merge=0/0, ticks=15214/10741, in_queue=25955, util=87.82% 
00:16:43.057 nvme0n3: ios=2560/2611, merge=0/0, ticks=23546/27062, in_queue=50608, util=89.20% 00:16:43.057 nvme0n4: ios=2048/2399, merge=0/0, ticks=17048/16458, in_queue=33506, util=89.77% 00:16:43.057 06:34:35 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:43.057 [global] 00:16:43.057 thread=1 00:16:43.057 invalidate=1 00:16:43.057 rw=randwrite 00:16:43.057 time_based=1 00:16:43.057 runtime=1 00:16:43.057 ioengine=libaio 00:16:43.057 direct=1 00:16:43.057 bs=4096 00:16:43.057 iodepth=128 00:16:43.057 norandommap=0 00:16:43.057 numjobs=1 00:16:43.057 00:16:43.057 verify_dump=1 00:16:43.057 verify_backlog=512 00:16:43.057 verify_state_save=0 00:16:43.057 do_verify=1 00:16:43.057 verify=crc32c-intel 00:16:43.057 [job0] 00:16:43.057 filename=/dev/nvme0n1 00:16:43.057 [job1] 00:16:43.057 filename=/dev/nvme0n2 00:16:43.057 [job2] 00:16:43.057 filename=/dev/nvme0n3 00:16:43.057 [job3] 00:16:43.057 filename=/dev/nvme0n4 00:16:43.057 Could not set queue depth (nvme0n1) 00:16:43.057 Could not set queue depth (nvme0n2) 00:16:43.057 Could not set queue depth (nvme0n3) 00:16:43.057 Could not set queue depth (nvme0n4) 00:16:43.057 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:43.057 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:43.057 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:43.057 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:43.057 fio-3.35 00:16:43.057 Starting 4 threads 00:16:44.437 00:16:44.437 job0: (groupid=0, jobs=1): err= 0: pid=87051: Fri Oct 4 06:34:36 2024 00:16:44.437 read: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec) 00:16:44.437 slat (usec): min=7, max=11217, avg=187.63, stdev=989.78 00:16:44.437 clat (usec): min=11408, max=48936, avg=22776.88, stdev=6000.25 00:16:44.437 lat (usec): min=11440, max=48954, avg=22964.51, stdev=6090.91 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[12518], 5.00th=[15533], 10.00th=[16581], 20.00th=[17695], 00:16:44.437 | 30.00th=[17957], 40.00th=[18482], 50.00th=[23200], 60.00th=[24511], 00:16:44.437 | 70.00th=[25822], 80.00th=[28181], 90.00th=[31065], 95.00th=[31851], 00:16:44.437 | 99.00th=[40109], 99.50th=[41157], 99.90th=[49021], 99.95th=[49021], 00:16:44.437 | 99.99th=[49021] 00:16:44.437 write: IOPS=2444, BW=9779KiB/s (10.0MB/s)(9916KiB/1014msec); 0 zone resets 00:16:44.437 slat (usec): min=13, max=12868, avg=239.39, stdev=980.09 00:16:44.437 clat (usec): min=12960, max=60080, avg=32753.00, stdev=9013.72 00:16:44.437 lat (usec): min=14498, max=60118, avg=32992.40, stdev=9086.39 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[15533], 5.00th=[21365], 10.00th=[25560], 20.00th=[26346], 00:16:44.437 | 30.00th=[27395], 40.00th=[28443], 50.00th=[29230], 60.00th=[31327], 00:16:44.437 | 70.00th=[34866], 80.00th=[41681], 90.00th=[45876], 95.00th=[50594], 00:16:44.437 | 99.00th=[56886], 99.50th=[58983], 99.90th=[60031], 99.95th=[60031], 00:16:44.437 | 99.99th=[60031] 00:16:44.437 bw ( KiB/s): min= 8713, max=10112, per=24.06%, avg=9412.50, stdev=989.24, samples=2 00:16:44.437 iops : min= 2178, max= 2528, avg=2353.00, stdev=247.49, samples=2 00:16:44.437 lat (msec) : 20=22.93%, 50=73.85%, 100=3.23% 00:16:44.437 cpu : usr=2.17%, sys=8.49%, ctx=307, majf=0, minf=3 00:16:44.437 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:44.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.437 issued rwts: total=2048,2479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.437 job1: (groupid=0, jobs=1): err= 0: pid=87052: Fri Oct 4 06:34:36 2024 00:16:44.437 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:16:44.437 slat (usec): min=10, max=11008, avg=166.92, stdev=961.37 00:16:44.437 clat (usec): min=11163, max=38944, avg=20608.49, stdev=4472.33 00:16:44.437 lat (usec): min=11190, max=38981, avg=20775.41, stdev=4549.37 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[12780], 5.00th=[16188], 10.00th=[17433], 20.00th=[17695], 00:16:44.437 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19792], 00:16:44.437 | 70.00th=[21890], 80.00th=[24511], 90.00th=[27395], 95.00th=[30540], 00:16:44.437 | 99.00th=[34341], 99.50th=[35914], 99.90th=[35914], 99.95th=[36439], 00:16:44.437 | 99.99th=[39060] 00:16:44.437 write: IOPS=2655, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1015msec); 0 zone resets 00:16:44.437 slat (usec): min=13, max=12551, avg=203.50, stdev=885.15 00:16:44.437 clat (usec): min=10829, max=53814, avg=28032.53, stdev=9151.01 00:16:44.437 lat (usec): min=10854, max=53841, avg=28236.03, stdev=9231.03 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[14353], 5.00th=[15926], 10.00th=[16712], 20.00th=[19268], 00:16:44.437 | 30.00th=[24773], 40.00th=[26346], 50.00th=[27132], 60.00th=[28443], 00:16:44.437 | 70.00th=[29754], 80.00th=[33817], 90.00th=[42206], 95.00th=[46400], 00:16:44.437 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:16:44.437 | 99.99th=[53740] 00:16:44.437 bw ( KiB/s): min= 8312, max=12288, per=26.33%, avg=10300.00, stdev=2811.46, samples=2 00:16:44.437 iops : min= 2078, max= 3072, avg=2575.00, stdev=702.86, samples=2 00:16:44.437 lat (msec) : 20=41.58%, 50=56.92%, 100=1.50% 00:16:44.437 cpu : usr=2.66%, sys=9.07%, ctx=300, majf=0, minf=2 00:16:44.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.437 issued rwts: total=2560,2695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.437 job2: (groupid=0, jobs=1): err= 0: pid=87053: Fri Oct 4 06:34:36 2024 00:16:44.437 read: IOPS=2476, BW=9905KiB/s (10.1MB/s)(9964KiB/1006msec) 00:16:44.437 slat (usec): min=6, max=16147, avg=173.37, stdev=1005.48 00:16:44.437 clat (usec): min=5027, max=66252, avg=19462.05, stdev=8450.97 00:16:44.437 lat (usec): min=5040, max=66269, avg=19635.42, stdev=8540.85 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[ 7701], 5.00th=[12125], 10.00th=[12911], 20.00th=[13566], 00:16:44.437 | 30.00th=[15008], 40.00th=[15270], 50.00th=[17695], 60.00th=[19268], 00:16:44.437 | 70.00th=[20579], 80.00th=[21103], 90.00th=[29754], 95.00th=[38536], 00:16:44.437 | 99.00th=[54789], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:16:44.437 | 99.99th=[66323] 00:16:44.437 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:16:44.437 slat (usec): min=6, max=16947, avg=212.66, stdev=933.06 00:16:44.437 clat (usec): min=4174, 
max=66202, avg=30829.31, stdev=13534.76 00:16:44.437 lat (usec): min=4205, max=66214, avg=31041.96, stdev=13628.59 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[ 6652], 5.00th=[11731], 10.00th=[14353], 20.00th=[19530], 00:16:44.437 | 30.00th=[21103], 40.00th=[27919], 50.00th=[28705], 60.00th=[30540], 00:16:44.437 | 70.00th=[36439], 80.00th=[44303], 90.00th=[50070], 95.00th=[56361], 00:16:44.437 | 99.00th=[61604], 99.50th=[63177], 99.90th=[64750], 99.95th=[66323], 00:16:44.437 | 99.99th=[66323] 00:16:44.437 bw ( KiB/s): min= 9104, max=11398, per=26.21%, avg=10251.00, stdev=1622.10, samples=2 00:16:44.437 iops : min= 2276, max= 2849, avg=2562.50, stdev=405.17, samples=2 00:16:44.437 lat (msec) : 10=2.38%, 20=39.40%, 50=52.60%, 100=5.62% 00:16:44.437 cpu : usr=2.79%, sys=7.16%, ctx=341, majf=0, minf=1 00:16:44.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.437 issued rwts: total=2491,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.437 job3: (groupid=0, jobs=1): err= 0: pid=87054: Fri Oct 4 06:34:36 2024 00:16:44.437 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:16:44.437 slat (usec): min=5, max=9230, avg=150.46, stdev=817.17 00:16:44.437 clat (usec): min=11425, max=53195, avg=18570.80, stdev=3746.94 00:16:44.437 lat (usec): min=11445, max=53208, avg=18721.26, stdev=3819.94 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[12780], 5.00th=[14353], 10.00th=[15664], 20.00th=[16188], 00:16:44.437 | 30.00th=[16581], 40.00th=[16909], 50.00th=[18482], 60.00th=[18744], 00:16:44.437 | 70.00th=[19006], 80.00th=[20317], 90.00th=[22152], 95.00th=[24511], 00:16:44.437 | 99.00th=[28967], 99.50th=[33162], 99.90th=[53216], 99.95th=[53216], 00:16:44.437 | 99.99th=[53216] 00:16:44.437 write: IOPS=2169, BW=8677KiB/s (8885kB/s)(8764KiB/1010msec); 0 zone resets 00:16:44.437 slat (usec): min=12, max=33694, avg=306.81, stdev=1596.00 00:16:44.437 clat (usec): min=7521, max=88711, avg=40820.56, stdev=17087.49 00:16:44.437 lat (usec): min=11230, max=88808, avg=41127.37, stdev=17154.41 00:16:44.437 clat percentiles (usec): 00:16:44.437 | 1.00th=[14353], 5.00th=[19268], 10.00th=[25297], 20.00th=[28181], 00:16:44.437 | 30.00th=[28443], 40.00th=[30016], 50.00th=[32375], 60.00th=[41157], 00:16:44.437 | 70.00th=[44827], 80.00th=[60556], 90.00th=[69731], 95.00th=[73925], 00:16:44.437 | 99.00th=[76022], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:16:44.437 | 99.99th=[88605] 00:16:44.437 bw ( KiB/s): min= 8208, max= 8312, per=21.12%, avg=8260.00, stdev=73.54, samples=2 00:16:44.437 iops : min= 2052, max= 2078, avg=2065.00, stdev=18.38, samples=2 00:16:44.437 lat (msec) : 10=0.02%, 20=41.28%, 50=44.16%, 100=14.53% 00:16:44.437 cpu : usr=2.58%, sys=7.14%, ctx=287, majf=0, minf=11 00:16:44.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:44.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.437 issued rwts: total=2048,2191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.437 00:16:44.437 Run status group 0 (all jobs): 00:16:44.437 READ: bw=35.2MiB/s (36.9MB/s), 8079KiB/s-9.85MiB/s 
(8273kB/s-10.3MB/s), io=35.7MiB (37.5MB), run=1006-1015msec 00:16:44.437 WRITE: bw=38.2MiB/s (40.1MB/s), 8677KiB/s-10.4MiB/s (8885kB/s-10.9MB/s), io=38.8MiB (40.7MB), run=1006-1015msec 00:16:44.437 00:16:44.437 Disk stats (read/write): 00:16:44.437 nvme0n1: ios=1837/2048, merge=0/0, ticks=20530/31273, in_queue=51803, util=88.88% 00:16:44.437 nvme0n2: ios=2097/2367, merge=0/0, ticks=21085/29911, in_queue=50996, util=89.37% 00:16:44.437 nvme0n3: ios=2048/2327, merge=0/0, ticks=36554/67322, in_queue=103876, util=89.05% 00:16:44.438 nvme0n4: ios=1536/1975, merge=0/0, ticks=13734/38239, in_queue=51973, util=89.60% 00:16:44.438 06:34:36 -- target/fio.sh@55 -- # sync 00:16:44.438 06:34:36 -- target/fio.sh@59 -- # fio_pid=87076 00:16:44.438 06:34:36 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:44.438 06:34:36 -- target/fio.sh@61 -- # sleep 3 00:16:44.438 [global] 00:16:44.438 thread=1 00:16:44.438 invalidate=1 00:16:44.438 rw=read 00:16:44.438 time_based=1 00:16:44.438 runtime=10 00:16:44.438 ioengine=libaio 00:16:44.438 direct=1 00:16:44.438 bs=4096 00:16:44.438 iodepth=1 00:16:44.438 norandommap=1 00:16:44.438 numjobs=1 00:16:44.438 00:16:44.438 [job0] 00:16:44.438 filename=/dev/nvme0n1 00:16:44.438 [job1] 00:16:44.438 filename=/dev/nvme0n2 00:16:44.438 [job2] 00:16:44.438 filename=/dev/nvme0n3 00:16:44.438 [job3] 00:16:44.438 filename=/dev/nvme0n4 00:16:44.438 Could not set queue depth (nvme0n1) 00:16:44.438 Could not set queue depth (nvme0n2) 00:16:44.438 Could not set queue depth (nvme0n3) 00:16:44.438 Could not set queue depth (nvme0n4) 00:16:44.696 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.696 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.696 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.696 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.696 fio-3.35 00:16:44.696 Starting 4 threads 00:16:47.994 06:34:39 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:47.994 fio: pid=87120, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:47.994 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43573248, buflen=4096 00:16:47.994 06:34:40 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:47.994 fio: pid=87119, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:47.994 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43356160, buflen=4096 00:16:47.994 06:34:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:47.994 06:34:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:48.253 fio: pid=87117, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:48.253 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=38408192, buflen=4096 00:16:48.253 06:34:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.253 06:34:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:48.512 fio: io_u error on file /dev/nvme0n2: Operation not 
supported: read offset=44199936, buflen=4096 00:16:48.512 fio: pid=87118, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:48.512 00:16:48.512 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87117: Fri Oct 4 06:34:41 2024 00:16:48.512 read: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(36.6MiB/3485msec) 00:16:48.512 slat (usec): min=12, max=18684, avg=21.87, stdev=236.21 00:16:48.512 clat (usec): min=146, max=2577, avg=348.05, stdev=64.55 00:16:48.512 lat (usec): min=159, max=19004, avg=369.92, stdev=244.90 00:16:48.512 clat percentiles (usec): 00:16:48.512 | 1.00th=[ 186], 5.00th=[ 258], 10.00th=[ 277], 20.00th=[ 318], 00:16:48.512 | 30.00th=[ 330], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 363], 00:16:48.512 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 424], 00:16:48.512 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 816], 99.95th=[ 1254], 00:16:48.512 | 99.99th=[ 2573] 00:16:48.512 bw ( KiB/s): min=10272, max=10736, per=23.76%, avg=10500.00, stdev=197.57, samples=6 00:16:48.512 iops : min= 2568, max= 2684, avg=2625.00, stdev=49.39, samples=6 00:16:48.512 lat (usec) : 250=3.29%, 500=96.28%, 750=0.30%, 1000=0.03% 00:16:48.512 lat (msec) : 2=0.06%, 4=0.02% 00:16:48.512 cpu : usr=0.75%, sys=3.67%, ctx=9383, majf=0, minf=1 00:16:48.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 issued rwts: total=9378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.512 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87118: Fri Oct 4 06:34:41 2024 00:16:48.512 read: IOPS=2880, BW=11.3MiB/s (11.8MB/s)(42.2MiB/3746msec) 00:16:48.512 slat (usec): min=11, max=9650, avg=27.64, stdev=187.74 00:16:48.512 clat (usec): min=114, max=4804, avg=317.35, stdev=108.57 00:16:48.512 lat (usec): min=145, max=9953, avg=344.99, stdev=217.12 00:16:48.512 clat percentiles (usec): 00:16:48.512 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 169], 20.00th=[ 265], 00:16:48.512 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:16:48.512 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 408], 00:16:48.512 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 963], 99.95th=[ 2180], 00:16:48.512 | 99.99th=[ 4015] 00:16:48.512 bw ( KiB/s): min=10192, max=14538, per=25.00%, avg=11051.71, stdev=1550.42, samples=7 00:16:48.512 iops : min= 2548, max= 3634, avg=2762.86, stdev=387.42, samples=7 00:16:48.512 lat (usec) : 250=16.03%, 500=83.71%, 750=0.12%, 1000=0.05% 00:16:48.512 lat (msec) : 2=0.03%, 4=0.05%, 10=0.01% 00:16:48.512 cpu : usr=1.23%, sys=5.53%, ctx=10811, majf=0, minf=2 00:16:48.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 issued rwts: total=10792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.512 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87119: Fri Oct 4 06:34:41 2024 00:16:48.512 read: IOPS=3324, BW=13.0MiB/s (13.6MB/s)(41.3MiB/3184msec) 00:16:48.512 
slat (usec): min=8, max=10614, avg=18.35, stdev=126.58 00:16:48.512 clat (usec): min=158, max=46364, avg=280.76, stdev=636.88 00:16:48.512 lat (usec): min=175, max=46398, avg=299.11, stdev=649.59 00:16:48.512 clat percentiles (usec): 00:16:48.512 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:16:48.512 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 251], 00:16:48.512 | 70.00th=[ 269], 80.00th=[ 343], 90.00th=[ 400], 95.00th=[ 433], 00:16:48.512 | 99.00th=[ 537], 99.50th=[ 594], 99.90th=[ 1516], 99.95th=[ 2802], 00:16:48.512 | 99.99th=[45351] 00:16:48.512 bw ( KiB/s): min= 9512, max=16016, per=30.45%, avg=13457.33, stdev=2955.29, samples=6 00:16:48.512 iops : min= 2378, max= 4004, avg=3364.33, stdev=738.82, samples=6 00:16:48.512 lat (usec) : 250=59.78%, 500=38.74%, 750=1.20%, 1000=0.10% 00:16:48.512 lat (msec) : 2=0.10%, 4=0.04%, 10=0.01%, 50=0.02% 00:16:48.512 cpu : usr=1.07%, sys=4.65%, ctx=10588, majf=0, minf=2 00:16:48.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.512 issued rwts: total=10586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.512 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87120: Fri Oct 4 06:34:41 2024 00:16:48.512 read: IOPS=3647, BW=14.2MiB/s (14.9MB/s)(41.6MiB/2917msec) 00:16:48.512 slat (nsec): min=8893, max=98072, avg=14307.39, stdev=4079.46 00:16:48.512 clat (usec): min=138, max=4191, avg=258.32, stdev=103.86 00:16:48.512 lat (usec): min=152, max=4204, avg=272.62, stdev=104.04 00:16:48.512 clat percentiles (usec): 00:16:48.512 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 198], 00:16:48.513 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 239], 00:16:48.513 | 70.00th=[ 253], 80.00th=[ 334], 90.00th=[ 400], 95.00th=[ 429], 00:16:48.513 | 99.00th=[ 519], 99.50th=[ 553], 99.90th=[ 693], 99.95th=[ 1205], 00:16:48.513 | 99.99th=[ 3589] 00:16:48.513 bw ( KiB/s): min= 9872, max=17360, per=35.03%, avg=15483.20, stdev=3152.23, samples=5 00:16:48.513 iops : min= 2468, max= 4340, avg=3870.80, stdev=788.06, samples=5 00:16:48.513 lat (usec) : 250=68.43%, 500=30.41%, 750=1.06%, 1000=0.04% 00:16:48.513 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 00:16:48.513 cpu : usr=0.89%, sys=4.49%, ctx=10640, majf=0, minf=2 00:16:48.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.513 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.513 issued rwts: total=10639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.513 00:16:48.513 Run status group 0 (all jobs): 00:16:48.513 READ: bw=43.2MiB/s (45.3MB/s), 10.5MiB/s-14.2MiB/s (11.0MB/s-14.9MB/s), io=162MiB (170MB), run=2917-3746msec 00:16:48.513 00:16:48.513 Disk stats (read/write): 00:16:48.513 nvme0n1: ios=8988/0, merge=0/0, ticks=3218/0, in_queue=3218, util=95.25% 00:16:48.513 nvme0n2: ios=10082/0, merge=0/0, ticks=3359/0, in_queue=3359, util=95.56% 00:16:48.513 nvme0n3: ios=10411/0, merge=0/0, ticks=2901/0, in_queue=2901, util=96.12% 00:16:48.513 nvme0n4: ios=10532/0, merge=0/0, ticks=2714/0, in_queue=2714, util=96.59% 00:16:48.513 
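The err=95 results above are the point of this run: the script deletes the raid and malloc bdevs over RPC while a 10-second read job is still in flight, and each fio thread is expected to abort with "Operation not supported" (EOPNOTSUPP) rather than hang. A minimal sketch of the same expected-failure pattern, assuming the rpc.py path used throughout this log; the Malloc0 target and the 3-second delay are illustrative, not taken from the trace:
  # Start a time-based read workload in the background, then pull the
  # backing bdev out from under it; fio should exit non-zero with err=95.
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4k \
      --iodepth=1 --ioengine=libaio --direct=1 --time_based --runtime=10 &
  FIO_PID=$!
  sleep 3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
  wait "$FIO_PID" || echo "fio failed as expected"
The teardown that follows deletes the remaining Malloc bdevs the same way, which is why the wrapper's nonzero exit is treated as success further down.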
06:34:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.513 06:34:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:48.771 06:34:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:48.771 06:34:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:49.029 06:34:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:49.029 06:34:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:49.287 06:34:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:49.287 06:34:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:49.546 06:34:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:49.546 06:34:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:49.804 06:34:42 -- target/fio.sh@69 -- # fio_status=0 00:16:49.804 06:34:42 -- target/fio.sh@70 -- # wait 87076 00:16:49.804 06:34:42 -- target/fio.sh@70 -- # fio_status=4 00:16:49.804 06:34:42 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.804 06:34:42 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.804 06:34:42 -- common/autotest_common.sh@1198 -- # local i=0 00:16:49.804 06:34:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.804 06:34:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:49.804 06:34:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:49.804 06:34:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.804 06:34:42 -- common/autotest_common.sh@1210 -- # return 0 00:16:49.804 06:34:42 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:49.804 nvmf hotplug test: fio failed as expected 00:16:49.804 06:34:42 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:49.804 06:34:42 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.062 06:34:42 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:50.062 06:34:42 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:50.062 06:34:42 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:50.062 06:34:42 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:50.062 06:34:42 -- target/fio.sh@91 -- # nvmftestfini 00:16:50.062 06:34:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:50.062 06:34:42 -- nvmf/common.sh@116 -- # sync 00:16:50.062 06:34:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:50.062 06:34:42 -- nvmf/common.sh@119 -- # set +e 00:16:50.062 06:34:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:50.062 06:34:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:50.321 rmmod nvme_tcp 00:16:50.321 rmmod nvme_fabrics 00:16:50.321 rmmod nvme_keyring 00:16:50.321 06:34:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:50.321 06:34:42 -- nvmf/common.sh@123 -- # set -e 00:16:50.321 06:34:42 -- nvmf/common.sh@124 -- # return 0 00:16:50.321 06:34:42 -- nvmf/common.sh@477 -- # '[' 
-n 86576 ']' 00:16:50.321 06:34:42 -- nvmf/common.sh@478 -- # killprocess 86576 00:16:50.321 06:34:42 -- common/autotest_common.sh@926 -- # '[' -z 86576 ']' 00:16:50.321 06:34:42 -- common/autotest_common.sh@930 -- # kill -0 86576 00:16:50.321 06:34:42 -- common/autotest_common.sh@931 -- # uname 00:16:50.321 06:34:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.321 06:34:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86576 00:16:50.321 06:34:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:50.321 killing process with pid 86576 00:16:50.321 06:34:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:50.321 06:34:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86576' 00:16:50.321 06:34:42 -- common/autotest_common.sh@945 -- # kill 86576 00:16:50.321 06:34:42 -- common/autotest_common.sh@950 -- # wait 86576 00:16:50.579 06:34:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:50.579 06:34:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:50.579 06:34:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:50.579 06:34:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.579 06:34:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:50.579 06:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.579 06:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.579 06:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.579 06:34:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:50.579 00:16:50.579 real 0m19.774s 00:16:50.579 user 1m16.396s 00:16:50.579 sys 0m7.847s 00:16:50.579 06:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.579 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:16:50.579 ************************************ 00:16:50.579 END TEST nvmf_fio_target 00:16:50.579 ************************************ 00:16:50.579 06:34:43 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:50.579 06:34:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:50.579 06:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:50.579 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:16:50.579 ************************************ 00:16:50.579 START TEST nvmf_bdevio 00:16:50.579 ************************************ 00:16:50.579 06:34:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:50.579 * Looking for test storage... 
00:16:50.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.579 06:34:43 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.579 06:34:43 -- nvmf/common.sh@7 -- # uname -s 00:16:50.839 06:34:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.839 06:34:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.839 06:34:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.839 06:34:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.839 06:34:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.839 06:34:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.839 06:34:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.839 06:34:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.839 06:34:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.839 06:34:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:50.839 06:34:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:50.839 06:34:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.839 06:34:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.839 06:34:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.839 06:34:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.839 06:34:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.839 06:34:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.839 06:34:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.839 06:34:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.839 06:34:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.839 06:34:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.839 06:34:43 -- 
paths/export.sh@5 -- # export PATH 00:16:50.839 06:34:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.839 06:34:43 -- nvmf/common.sh@46 -- # : 0 00:16:50.839 06:34:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.839 06:34:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.839 06:34:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.839 06:34:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.839 06:34:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.839 06:34:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.839 06:34:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.839 06:34:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.839 06:34:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.839 06:34:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.839 06:34:43 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:50.839 06:34:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:50.839 06:34:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.839 06:34:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.839 06:34:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.839 06:34:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.839 06:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.839 06:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.839 06:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.839 06:34:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:50.839 06:34:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:50.839 06:34:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.839 06:34:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.839 06:34:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.839 06:34:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:50.839 06:34:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.839 06:34:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.839 06:34:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.839 06:34:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.839 06:34:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.839 06:34:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.839 06:34:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.839 06:34:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.839 06:34:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:50.839 
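The interface variables just defined describe a bridged veth topology: the target runs inside the nvmf_tgt_ns_spdk namespace, with veth pairs joined by the nvmf_br bridge to the initiator side. A simplified standalone equivalent of the nvmf_veth_init flow traced below, assuming iproute2 and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), the iptables rules, and the error-tolerant cleanup:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.2   # initiator -> target reachability check
The "Cannot find device" and "No such file or directory" messages in the trace below come from the cleanup pass, which tries to delete leftovers from a previous run and tolerates their absence.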
06:34:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:50.839 Cannot find device "nvmf_tgt_br" 00:16:50.839 06:34:43 -- nvmf/common.sh@154 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.839 Cannot find device "nvmf_tgt_br2" 00:16:50.839 06:34:43 -- nvmf/common.sh@155 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:50.839 06:34:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:50.839 Cannot find device "nvmf_tgt_br" 00:16:50.839 06:34:43 -- nvmf/common.sh@157 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:50.839 Cannot find device "nvmf_tgt_br2" 00:16:50.839 06:34:43 -- nvmf/common.sh@158 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:50.839 06:34:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:50.839 06:34:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.839 06:34:43 -- nvmf/common.sh@161 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.839 06:34:43 -- nvmf/common.sh@162 -- # true 00:16:50.839 06:34:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.839 06:34:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.839 06:34:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.839 06:34:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.839 06:34:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.839 06:34:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.839 06:34:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.839 06:34:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.839 06:34:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.839 06:34:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:50.839 06:34:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:50.839 06:34:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:50.839 06:34:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:51.098 06:34:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.098 06:34:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.098 06:34:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.098 06:34:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:51.098 06:34:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:51.098 06:34:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.098 06:34:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.098 06:34:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.098 06:34:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.098 06:34:43 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.098 06:34:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:51.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:51.098 00:16:51.098 --- 10.0.0.2 ping statistics --- 00:16:51.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.098 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:51.098 06:34:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:51.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:51.098 00:16:51.098 --- 10.0.0.3 ping statistics --- 00:16:51.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.098 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:51.098 06:34:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:51.098 00:16:51.098 --- 10.0.0.1 ping statistics --- 00:16:51.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.098 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:51.098 06:34:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.098 06:34:43 -- nvmf/common.sh@421 -- # return 0 00:16:51.098 06:34:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.098 06:34:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.098 06:34:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:51.098 06:34:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:51.098 06:34:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.098 06:34:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:51.098 06:34:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:51.098 06:34:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:51.098 06:34:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.098 06:34:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.098 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:16:51.098 06:34:43 -- nvmf/common.sh@469 -- # nvmfpid=87443 00:16:51.098 06:34:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:51.098 06:34:43 -- nvmf/common.sh@470 -- # waitforlisten 87443 00:16:51.098 06:34:43 -- common/autotest_common.sh@819 -- # '[' -z 87443 ']' 00:16:51.098 06:34:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.098 06:34:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.098 06:34:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.098 06:34:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.098 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:16:51.099 [2024-10-04 06:34:43.676234] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:16:51.099 [2024-10-04 06:34:43.676325] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.357 [2024-10-04 06:34:43.815217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.357 [2024-10-04 06:34:43.880459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.357 [2024-10-04 06:34:43.880604] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.357 [2024-10-04 06:34:43.880616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.357 [2024-10-04 06:34:43.880624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.357 [2024-10-04 06:34:43.880764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.357 [2024-10-04 06:34:43.880889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:51.357 [2024-10-04 06:34:43.881004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:51.357 [2024-10-04 06:34:43.881008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.294 06:34:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.294 06:34:44 -- common/autotest_common.sh@852 -- # return 0 00:16:52.294 06:34:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.294 06:34:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 06:34:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.294 06:34:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.294 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 [2024-10-04 06:34:44.748379] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.294 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.294 06:34:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:52.294 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 Malloc0 00:16:52.294 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.294 06:34:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:52.294 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.294 06:34:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:52.294 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.294 06:34:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.294 06:34:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:52.294 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 
[2024-10-04 06:34:44.826753] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.294 06:34:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:52.294 06:34:44 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:52.294 06:34:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:52.294 06:34:44 -- nvmf/common.sh@520 -- # config=() 00:16:52.294 06:34:44 -- nvmf/common.sh@520 -- # local subsystem config 00:16:52.294 06:34:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:52.294 06:34:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:52.294 { 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme$subsystem", 00:16:52.294 "trtype": "$TEST_TRANSPORT", 00:16:52.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "$NVMF_PORT", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.294 "hdgst": ${hdgst:-false}, 00:16:52.294 "ddgst": ${ddgst:-false} 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 } 00:16:52.294 EOF 00:16:52.294 )") 00:16:52.294 06:34:44 -- nvmf/common.sh@542 -- # cat 00:16:52.294 06:34:44 -- nvmf/common.sh@544 -- # jq . 00:16:52.294 06:34:44 -- nvmf/common.sh@545 -- # IFS=, 00:16:52.294 06:34:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme1", 00:16:52.295 "trtype": "tcp", 00:16:52.295 "traddr": "10.0.0.2", 00:16:52.295 "adrfam": "ipv4", 00:16:52.295 "trsvcid": "4420", 00:16:52.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.295 "hdgst": false, 00:16:52.295 "ddgst": false 00:16:52.295 }, 00:16:52.295 "method": "bdev_nvme_attach_controller" 00:16:52.295 }' 00:16:52.295 [2024-10-04 06:34:44.881586] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:16:52.295 [2024-10-04 06:34:44.881668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87500 ] 00:16:52.554 [2024-10-04 06:34:45.023751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.554 [2024-10-04 06:34:45.111591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.554 [2024-10-04 06:34:45.111772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.554 [2024-10-04 06:34:45.111777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.832 [2024-10-04 06:34:45.313592] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
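
The JSON handed to bdevio above is assembled by the gen_nvmf_target_json helper traced here: one heredoc fragment per subsystem with shell-expanded defaults (${hdgst:-false}), jq to validate and pretty-print, IFS=, to join fragments, and delivery over a process substitution (the /dev/fd/62 on the command line). A minimal standalone sketch of the same pattern; the wrapper name and the enclosing JSON array are illustrative simplifications, since the harness's exact output framing is elided in the trace:

gen_target_json() {   # illustrative stand-in for the traced gen_nvmf_target_json
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,                  # join multiple fragments with commas
    jq . <<< "[${config[*]}]"    # array wrapper added here so jq always sees valid JSON
}

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_target_json 1)
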
00:16:52.832 [2024-10-04 06:34:45.313651] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:52.832 I/O targets: 00:16:52.832 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:52.832 00:16:52.832 00:16:52.832 CUnit - A unit testing framework for C - Version 2.1-3 00:16:52.832 http://cunit.sourceforge.net/ 00:16:52.832 00:16:52.832 00:16:52.832 Suite: bdevio tests on: Nvme1n1 00:16:52.832 Test: blockdev write read block ...passed 00:16:52.832 Test: blockdev write zeroes read block ...passed 00:16:52.832 Test: blockdev write zeroes read no split ...passed 00:16:52.832 Test: blockdev write zeroes read split ...passed 00:16:52.832 Test: blockdev write zeroes read split partial ...passed 00:16:52.832 Test: blockdev reset ...[2024-10-04 06:34:45.430278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:52.832 [2024-10-04 06:34:45.430373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d1ee0 (9): Bad file descriptor 00:16:52.832 passed 00:16:52.832 Test: blockdev write read 8 blocks ...[2024-10-04 06:34:45.443059] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:52.832 passed 00:16:52.832 Test: blockdev write read size > 128k ...passed 00:16:52.832 Test: blockdev write read invalid size ...passed 00:16:52.832 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:52.832 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:52.832 Test: blockdev write read max offset ...passed 00:16:53.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:53.116 Test: blockdev writev readv 8 blocks ...passed 00:16:53.116 Test: blockdev writev readv 30 x 1block ...passed 00:16:53.116 Test: blockdev writev readv block ...passed 00:16:53.116 Test: blockdev writev readv size > 128k ...passed 00:16:53.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:53.116 Test: blockdev comparev and writev ...[2024-10-04 06:34:45.613530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.613608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.613619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.613911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.613930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.613946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.613955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.614272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.614288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.614303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.614313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.614570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.614586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.614600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.116 [2024-10-04 06:34:45.614610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.116 passed 00:16:53.116 Test: blockdev nvme passthru rw ...passed 00:16:53.116 Test: blockdev nvme passthru vendor specific ...[2024-10-04 06:34:45.696178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.116 [2024-10-04 06:34:45.696203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.696344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.116 [2024-10-04 06:34:45.696360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:53.116 passed 00:16:53.116 Test: blockdev nvme admin passthru ...[2024-10-04 06:34:45.696484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.116 [2024-10-04 06:34:45.696504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:53.116 [2024-10-04 06:34:45.696609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.116 [2024-10-04 06:34:45.696623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:53.116 passed 00:16:53.116 Test: blockdev copy ...passed 00:16:53.116 00:16:53.116 Run Summary: Type Total Ran Passed Failed Inactive 00:16:53.116 suites 1 1 n/a 0 0 00:16:53.116 tests 23 23 23 0 0 00:16:53.116 asserts 152 152 152 0 n/a 00:16:53.116 00:16:53.116 Elapsed time = 0.879 seconds 00:16:53.375 06:34:46 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.376 06:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.376 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.376 06:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.376 06:34:46 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:53.376 06:34:46 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:53.376 06:34:46 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:53.376 06:34:46 -- nvmf/common.sh@116 -- # sync 00:16:53.635 06:34:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:53.635 06:34:46 -- nvmf/common.sh@119 -- # set +e 00:16:53.635 06:34:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:53.635 06:34:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:53.635 rmmod nvme_tcp 00:16:53.635 rmmod nvme_fabrics 00:16:53.635 rmmod nvme_keyring 00:16:53.635 06:34:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:53.635 06:34:46 -- nvmf/common.sh@123 -- # set -e 00:16:53.635 06:34:46 -- nvmf/common.sh@124 -- # return 0 00:16:53.635 06:34:46 -- nvmf/common.sh@477 -- # '[' -n 87443 ']' 00:16:53.635 06:34:46 -- nvmf/common.sh@478 -- # killprocess 87443 00:16:53.635 06:34:46 -- common/autotest_common.sh@926 -- # '[' -z 87443 ']' 00:16:53.635 06:34:46 -- common/autotest_common.sh@930 -- # kill -0 87443 00:16:53.635 06:34:46 -- common/autotest_common.sh@931 -- # uname 00:16:53.635 06:34:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.635 06:34:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87443 00:16:53.635 06:34:46 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:53.635 06:34:46 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:53.635 killing process with pid 87443 00:16:53.635 06:34:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87443' 00:16:53.635 06:34:46 -- common/autotest_common.sh@945 -- # kill 87443 00:16:53.635 06:34:46 -- common/autotest_common.sh@950 -- # wait 87443 00:16:53.893 06:34:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:53.893 06:34:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:53.893 06:34:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:53.893 06:34:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.894 06:34:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:53.894 06:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.894 06:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.894 06:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.894 06:34:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:53.894 00:16:53.894 real 0m3.240s 00:16:53.894 user 0m12.152s 00:16:53.894 sys 0m0.840s 00:16:53.894 06:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.894 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.894 ************************************ 00:16:53.894 END TEST nvmf_bdevio 00:16:53.894 ************************************ 00:16:53.894 06:34:46 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:53.894 06:34:46 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:53.894 06:34:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:53.894 06:34:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:53.894 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.894 ************************************ 00:16:53.894 START TEST nvmf_bdevio_no_huge 00:16:53.894 ************************************ 00:16:53.894 06:34:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:53.894 * Looking for test storage... 
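
The nvmf_bdevio_no_huge pass starting here re-runs the exact same flow with DPDK's hugepage allocator turned off, which is the interesting delta to watch for in the EAL parameter lines below. In terms of the launch commands (paths as in the trace):

# hugepage run, as in the pass above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

# no-hugepage run: allocate from anonymous memory, capped at 1024 MB, and
# let EAL pick --iova-mode=va, since physical addresses are not stable
# without pinned hugepages (the first run used --iova-mode=pa):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
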
00:16:53.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:53.894 06:34:46 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:53.894 06:34:46 -- nvmf/common.sh@7 -- # uname -s 00:16:53.894 06:34:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.894 06:34:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.894 06:34:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.894 06:34:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.894 06:34:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.894 06:34:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.894 06:34:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.894 06:34:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.894 06:34:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.894 06:34:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.894 06:34:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:53.894 06:34:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:53.894 06:34:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.894 06:34:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.894 06:34:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:53.894 06:34:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:53.894 06:34:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.894 06:34:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.894 06:34:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.894 06:34:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 06:34:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 06:34:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 06:34:46 -- 
paths/export.sh@5 -- # export PATH 00:16:53.894 06:34:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.894 06:34:46 -- nvmf/common.sh@46 -- # : 0 00:16:53.894 06:34:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:53.894 06:34:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:53.894 06:34:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:53.894 06:34:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.894 06:34:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.894 06:34:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:53.894 06:34:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:53.894 06:34:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:53.894 06:34:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.894 06:34:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.894 06:34:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:53.894 06:34:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:53.894 06:34:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.894 06:34:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:53.894 06:34:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:53.894 06:34:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:53.894 06:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.894 06:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.894 06:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.152 06:34:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:54.152 06:34:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:54.152 06:34:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:54.152 06:34:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:54.152 06:34:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:54.152 06:34:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:54.152 06:34:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.152 06:34:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.152 06:34:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:54.152 06:34:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:54.152 06:34:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:54.152 06:34:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:54.152 06:34:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:54.152 06:34:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.152 06:34:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:54.152 06:34:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:54.152 06:34:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:54.152 06:34:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:54.152 06:34:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:54.152 
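
The veth and namespace provisioning that scrolls past next is identical to the first pass. Consolidated from the trace (the loop is just shorthand for the individual "up" commands), the topology nvmf_veth_init builds is:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                              # joins the host-side peers
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> both target ports
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target ns -> initiator
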
06:34:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:54.152 Cannot find device "nvmf_tgt_br" 00:16:54.152 06:34:46 -- nvmf/common.sh@154 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:54.152 Cannot find device "nvmf_tgt_br2" 00:16:54.152 06:34:46 -- nvmf/common.sh@155 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:54.152 06:34:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:54.152 Cannot find device "nvmf_tgt_br" 00:16:54.152 06:34:46 -- nvmf/common.sh@157 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:54.152 Cannot find device "nvmf_tgt_br2" 00:16:54.152 06:34:46 -- nvmf/common.sh@158 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:54.152 06:34:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:54.152 06:34:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:54.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.152 06:34:46 -- nvmf/common.sh@161 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:54.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:54.152 06:34:46 -- nvmf/common.sh@162 -- # true 00:16:54.152 06:34:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:54.152 06:34:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:54.152 06:34:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:54.152 06:34:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:54.152 06:34:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:54.152 06:34:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:54.152 06:34:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:54.152 06:34:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:54.152 06:34:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:54.152 06:34:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:54.152 06:34:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:54.152 06:34:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:54.152 06:34:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:54.152 06:34:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.413 06:34:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:54.413 06:34:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:54.413 06:34:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:54.413 06:34:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:54.413 06:34:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:54.413 06:34:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:54.413 06:34:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:54.413 06:34:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:54.413 06:34:46 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:54.413 06:34:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:54.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:16:54.413 00:16:54.413 --- 10.0.0.2 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:54.413 06:34:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:54.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:54.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:54.413 00:16:54.413 --- 10.0.0.3 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:54.413 06:34:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:54.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:54.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:54.413 00:16:54.413 --- 10.0.0.1 ping statistics --- 00:16:54.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.413 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:54.413 06:34:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.413 06:34:46 -- nvmf/common.sh@421 -- # return 0 00:16:54.413 06:34:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:54.413 06:34:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.413 06:34:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:54.413 06:34:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:54.413 06:34:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.413 06:34:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:54.413 06:34:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:54.413 06:34:46 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:54.413 06:34:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:54.413 06:34:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:54.413 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:16:54.413 06:34:46 -- nvmf/common.sh@469 -- # nvmfpid=87676 00:16:54.413 06:34:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:54.413 06:34:46 -- nvmf/common.sh@470 -- # waitforlisten 87676 00:16:54.413 06:34:46 -- common/autotest_common.sh@819 -- # '[' -z 87676 ']' 00:16:54.413 06:34:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.413 06:34:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.413 06:34:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.413 06:34:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.413 06:34:46 -- common/autotest_common.sh@10 -- # set +x 00:16:54.413 [2024-10-04 06:34:46.998732] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:16:54.413 [2024-10-04 06:34:46.998810] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:54.672 [2024-10-04 06:34:47.134154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.672 [2024-10-04 06:34:47.218082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:54.672 [2024-10-04 06:34:47.218224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.672 [2024-10-04 06:34:47.218237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.672 [2024-10-04 06:34:47.218511] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.672 [2024-10-04 06:34:47.218678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:54.672 [2024-10-04 06:34:47.218846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:54.672 [2024-10-04 06:34:47.218960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:54.672 [2024-10-04 06:34:47.219065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.608 06:34:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.608 06:34:48 -- common/autotest_common.sh@852 -- # return 0 00:16:55.608 06:34:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:55.608 06:34:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:55.608 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.608 06:34:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.608 06:34:48 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.608 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.608 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.608 [2024-10-04 06:34:48.092809] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.608 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.608 06:34:48 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.608 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.608 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.608 Malloc0 00:16:55.609 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.609 06:34:48 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:55.609 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.609 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.609 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.609 06:34:48 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.609 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.609 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.609 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.609 06:34:48 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.609 06:34:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:55.609 06:34:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.609 
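
The rpc_cmd calls just traced do the whole target-side configuration over the UNIX-socket RPC channel. Outside the harness, the same five steps are plain rpc.py invocations against the default /var/tmp/spdk.sock (sizes per the sourced bdevio.sh: MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
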
[2024-10-04 06:34:48.135359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.609 06:34:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:55.609 06:34:48 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:55.609 06:34:48 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:55.609 06:34:48 -- nvmf/common.sh@520 -- # config=() 00:16:55.609 06:34:48 -- nvmf/common.sh@520 -- # local subsystem config 00:16:55.609 06:34:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:55.609 06:34:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:55.609 { 00:16:55.609 "params": { 00:16:55.609 "name": "Nvme$subsystem", 00:16:55.609 "trtype": "$TEST_TRANSPORT", 00:16:55.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.609 "adrfam": "ipv4", 00:16:55.609 "trsvcid": "$NVMF_PORT", 00:16:55.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.609 "hdgst": ${hdgst:-false}, 00:16:55.609 "ddgst": ${ddgst:-false} 00:16:55.609 }, 00:16:55.609 "method": "bdev_nvme_attach_controller" 00:16:55.609 } 00:16:55.609 EOF 00:16:55.609 )") 00:16:55.609 06:34:48 -- nvmf/common.sh@542 -- # cat 00:16:55.609 06:34:48 -- nvmf/common.sh@544 -- # jq . 00:16:55.609 06:34:48 -- nvmf/common.sh@545 -- # IFS=, 00:16:55.609 06:34:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:55.609 "params": { 00:16:55.609 "name": "Nvme1", 00:16:55.609 "trtype": "tcp", 00:16:55.609 "traddr": "10.0.0.2", 00:16:55.609 "adrfam": "ipv4", 00:16:55.609 "trsvcid": "4420", 00:16:55.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.609 "hdgst": false, 00:16:55.609 "ddgst": false 00:16:55.609 }, 00:16:55.609 "method": "bdev_nvme_attach_controller" 00:16:55.609 }' 00:16:55.609 [2024-10-04 06:34:48.195104] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:16:55.609 [2024-10-04 06:34:48.195202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87730 ] 00:16:55.867 [2024-10-04 06:34:48.336031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.867 [2024-10-04 06:34:48.479196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.867 [2024-10-04 06:34:48.479372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.867 [2024-10-04 06:34:48.479377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.125 [2024-10-04 06:34:48.675786] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
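
The rpc.c errors that follow bdevio's startup (in this pass and the first one) are benign: nvmf_tgt already owns /var/tmp/spdk.sock, so the second SPDK app simply fails to bring up its own RPC listener and keeps running. If RPC access to both apps were actually needed, the usual fix is a second socket path; a sketch assuming the stock spdk_app_parse_args option set (-r selects the RPC listen address; this script does not pass it):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) -r /var/tmp/bdevio.sock &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevio.sock rpc_get_methods   # talks to bdevio, not nvmf_tgt
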
00:16:56.125 [2024-10-04 06:34:48.675863] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:56.125 I/O targets: 00:16:56.125 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:56.125 00:16:56.125 00:16:56.125 CUnit - A unit testing framework for C - Version 2.1-3 00:16:56.125 http://cunit.sourceforge.net/ 00:16:56.125 00:16:56.125 00:16:56.125 Suite: bdevio tests on: Nvme1n1 00:16:56.125 Test: blockdev write read block ...passed 00:16:56.125 Test: blockdev write zeroes read block ...passed 00:16:56.125 Test: blockdev write zeroes read no split ...passed 00:16:56.125 Test: blockdev write zeroes read split ...passed 00:16:56.125 Test: blockdev write zeroes read split partial ...passed 00:16:56.125 Test: blockdev reset ...[2024-10-04 06:34:48.802405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:56.125 [2024-10-04 06:34:48.802501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a63d10 (9): Bad file descriptor 00:16:56.383 passed 00:16:56.383 Test: blockdev write read 8 blocks ...[2024-10-04 06:34:48.814969] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:56.383 passed 00:16:56.383 Test: blockdev write read size > 128k ...passed 00:16:56.383 Test: blockdev write read invalid size ...passed 00:16:56.383 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:56.383 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:56.383 Test: blockdev write read max offset ...passed 00:16:56.383 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:56.383 Test: blockdev writev readv 8 blocks ...passed 00:16:56.383 Test: blockdev writev readv 30 x 1block ...passed 00:16:56.383 Test: blockdev writev readv block ...passed 00:16:56.383 Test: blockdev writev readv size > 128k ...passed 00:16:56.383 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:56.383 Test: blockdev comparev and writev ...[2024-10-04 06:34:48.986505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.986541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.986560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.986574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.986873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.986890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.986905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.986915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.987229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.987245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.987260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.383 [2024-10-04 06:34:48.987269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:56.383 [2024-10-04 06:34:48.987575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.384 [2024-10-04 06:34:48.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:56.384 [2024-10-04 06:34:48.987605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:56.384 [2024-10-04 06:34:48.987614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:56.384 passed 00:16:56.642 Test: blockdev nvme passthru rw ...passed 00:16:56.642 Test: blockdev nvme passthru vendor specific ...[2024-10-04 06:34:49.069149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:56.642 [2024-10-04 06:34:49.069175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:56.642 [2024-10-04 06:34:49.069286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:56.642 [2024-10-04 06:34:49.069301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:56.642 [2024-10-04 06:34:49.069429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:56.642 [2024-10-04 06:34:49.069444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:56.642 [2024-10-04 06:34:49.069571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:56.642 [2024-10-04 06:34:49.069585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:56.642 passed 00:16:56.642 Test: blockdev nvme admin passthru ...passed 00:16:56.642 Test: blockdev copy ...passed 00:16:56.642 00:16:56.642 Run Summary: Type Total Ran Passed Failed Inactive 00:16:56.642 suites 1 1 n/a 0 0 00:16:56.642 tests 23 23 23 0 0 00:16:56.642 asserts 152 152 152 0 n/a 00:16:56.642 00:16:56.642 Elapsed time = 0.898 seconds 00:16:56.900 06:34:49 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.900 06:34:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:56.900 06:34:49 -- common/autotest_common.sh@10 -- # set +x 00:16:56.900 06:34:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:56.900 06:34:49 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:56.900 06:34:49 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:56.900 06:34:49 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:56.900 06:34:49 -- nvmf/common.sh@116 -- # sync 00:16:56.900 06:34:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.900 06:34:49 -- nvmf/common.sh@119 -- # set +e 00:16:56.900 06:34:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.900 06:34:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:57.159 rmmod nvme_tcp 00:16:57.159 rmmod nvme_fabrics 00:16:57.159 rmmod nvme_keyring 00:16:57.159 06:34:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:57.159 06:34:49 -- nvmf/common.sh@123 -- # set -e 00:16:57.159 06:34:49 -- nvmf/common.sh@124 -- # return 0 00:16:57.159 06:34:49 -- nvmf/common.sh@477 -- # '[' -n 87676 ']' 00:16:57.159 06:34:49 -- nvmf/common.sh@478 -- # killprocess 87676 00:16:57.159 06:34:49 -- common/autotest_common.sh@926 -- # '[' -z 87676 ']' 00:16:57.159 06:34:49 -- common/autotest_common.sh@930 -- # kill -0 87676 00:16:57.159 06:34:49 -- common/autotest_common.sh@931 -- # uname 00:16:57.159 06:34:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.159 06:34:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87676 00:16:57.159 06:34:49 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:57.159 06:34:49 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:57.159 killing process with pid 87676 00:16:57.159 06:34:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87676' 00:16:57.159 06:34:49 -- common/autotest_common.sh@945 -- # kill 87676 00:16:57.159 06:34:49 -- common/autotest_common.sh@950 -- # wait 87676 00:16:57.418 06:34:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:57.418 06:34:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:57.418 06:34:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:57.418 06:34:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.418 06:34:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:57.418 06:34:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.418 06:34:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.418 06:34:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.418 06:34:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:57.418 00:16:57.418 real 0m3.570s 00:16:57.418 user 0m13.094s 00:16:57.418 sys 0m1.326s 00:16:57.418 06:34:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.418 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.418 ************************************ 00:16:57.418 END TEST nvmf_bdevio_no_huge 00:16:57.418 ************************************ 00:16:57.418 06:34:50 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:57.418 06:34:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:57.418 06:34:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.418 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.418 ************************************ 00:16:57.418 START TEST nvmf_tls 00:16:57.418 ************************************ 00:16:57.418 06:34:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:57.677 * Looking for test storage... 
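
Before the TLS pass gets going: nvmftestfini has now torn the rig down twice, in the same order each time (unload the kernel modules, kill the target, drop the namespace, flush the initiator address). For a run that dies before reaching it, a manual cleanup sketch; ip netns delete stands in for the harness's _remove_spdk_ns helper, whose output the trace redirects away:

kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # pid echoed by nvmfappstart (87676 above)
modprobe -v -r nvme-tcp                          # cascades: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
ip -4 addr flush nvmf_init_if 2>/dev/null
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null     # removes nvmf_tgt_if/_if2 with the namespace
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if 2>/dev/null          # its peer nvmf_init_br goes with it
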
00:16:57.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:57.677 06:34:50 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:57.677 06:34:50 -- nvmf/common.sh@7 -- # uname -s 00:16:57.677 06:34:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.677 06:34:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.677 06:34:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.677 06:34:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.677 06:34:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.677 06:34:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.677 06:34:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.677 06:34:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.677 06:34:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.677 06:34:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:57.677 06:34:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:16:57.677 06:34:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.677 06:34:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.677 06:34:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:57.677 06:34:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:57.677 06:34:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.677 06:34:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.677 06:34:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.677 06:34:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.677 06:34:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.677 06:34:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.677 06:34:50 -- paths/export.sh@5 
-- # export PATH 00:16:57.677 06:34:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.677 06:34:50 -- nvmf/common.sh@46 -- # : 0 00:16:57.677 06:34:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.677 06:34:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.677 06:34:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.677 06:34:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.677 06:34:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.677 06:34:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:57.677 06:34:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.677 06:34:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.677 06:34:50 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.677 06:34:50 -- target/tls.sh@71 -- # nvmftestinit 00:16:57.677 06:34:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:57.677 06:34:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.677 06:34:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:57.677 06:34:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:57.677 06:34:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:57.677 06:34:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.677 06:34:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.677 06:34:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.677 06:34:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:57.677 06:34:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:57.677 06:34:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.677 06:34:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.677 06:34:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:57.677 06:34:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:57.677 06:34:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:57.677 06:34:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:57.677 06:34:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:57.677 06:34:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.677 06:34:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:57.677 06:34:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:57.677 06:34:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:57.677 06:34:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:57.677 06:34:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:57.677 06:34:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:57.677 Cannot find device "nvmf_tgt_br" 00:16:57.677 06:34:50 -- nvmf/common.sh@154 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.677 Cannot find device "nvmf_tgt_br2" 00:16:57.677 06:34:50 -- nvmf/common.sh@155 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:57.677 06:34:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:57.677 Cannot find device "nvmf_tgt_br" 00:16:57.677 06:34:50 -- nvmf/common.sh@157 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:57.677 Cannot find device "nvmf_tgt_br2" 00:16:57.677 06:34:50 -- nvmf/common.sh@158 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:57.677 06:34:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:57.677 06:34:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.677 06:34:50 -- nvmf/common.sh@161 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.677 06:34:50 -- nvmf/common.sh@162 -- # true 00:16:57.677 06:34:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.677 06:34:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.677 06:34:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.677 06:34:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.936 06:34:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.936 06:34:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.936 06:34:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.936 06:34:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:57.936 06:34:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:57.936 06:34:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:57.936 06:34:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:57.936 06:34:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:57.936 06:34:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:57.936 06:34:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.936 06:34:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.936 06:34:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.936 06:34:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:57.936 06:34:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:57.936 06:34:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.936 06:34:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.936 06:34:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.936 06:34:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.936 06:34:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:57.936 06:34:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:57.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:57.936 00:16:57.936 --- 10.0.0.2 ping statistics --- 00:16:57.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.936 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:57.936 06:34:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:57.936 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.936 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:16:57.936 00:16:57.936 --- 10.0.0.3 ping statistics --- 00:16:57.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.936 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:57.936 06:34:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:57.936 00:16:57.936 --- 10.0.0.1 ping statistics --- 00:16:57.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.936 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:57.936 06:34:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.936 06:34:50 -- nvmf/common.sh@421 -- # return 0 00:16:57.936 06:34:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.936 06:34:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.936 06:34:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:57.936 06:34:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:57.936 06:34:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.936 06:34:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:57.936 06:34:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:57.936 06:34:50 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:57.936 06:34:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.936 06:34:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:57.936 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.936 06:34:50 -- nvmf/common.sh@469 -- # nvmfpid=87908 00:16:57.936 06:34:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:57.936 06:34:50 -- nvmf/common.sh@470 -- # waitforlisten 87908 00:16:57.936 06:34:50 -- common/autotest_common.sh@819 -- # '[' -z 87908 ']' 00:16:57.936 06:34:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.936 06:34:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:57.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.936 06:34:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.936 06:34:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:57.936 06:34:50 -- common/autotest_common.sh@10 -- # set +x 00:16:58.195 [2024-10-04 06:34:50.618682] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
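At this point nvmf_veth_init is complete: all three addresses answer single-packet pings, and the banner here marks nvmf_tgt being launched inside the new namespace via ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc. Condensed, the topology the trace above builds looks as follows (a sketch assuming iproute2 and iptables; names and addresses exactly as in the trace, with the second target interface, nvmf_tgt_if2 at 10.0.0.3, following the same pattern and omitted for brevity):

    # The target side lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end is the endpoint, the peer stays behind as a bridge port.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge in the root namespace joins the two host-side veth ends.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port toward the initiator and allow hairpin forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator to target, as verified above

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: the script tears down any leftover topology before rebuilding it, and each cleanup command is followed by true, so a clean machine does not fail the run.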
00:16:58.195 [2024-10-04 06:34:50.618766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.195 [2024-10-04 06:34:50.764325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.195 [2024-10-04 06:34:50.838929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:58.195 [2024-10-04 06:34:50.839106] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.195 [2024-10-04 06:34:50.839122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.195 [2024-10-04 06:34:50.839134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.195 [2024-10-04 06:34:50.839171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.129 06:34:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:59.129 06:34:51 -- common/autotest_common.sh@852 -- # return 0 00:16:59.129 06:34:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:59.129 06:34:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:59.129 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:16:59.129 06:34:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.129 06:34:51 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:59.129 06:34:51 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:59.387 true 00:16:59.387 06:34:51 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.387 06:34:51 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:59.645 06:34:52 -- target/tls.sh@82 -- # version=0 00:16:59.645 06:34:52 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:59.645 06:34:52 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:59.904 06:34:52 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.904 06:34:52 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:00.162 06:34:52 -- target/tls.sh@90 -- # version=13 00:17:00.162 06:34:52 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:00.162 06:34:52 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:00.421 06:34:53 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:00.421 06:34:53 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:00.679 06:34:53 -- target/tls.sh@98 -- # version=7 00:17:00.679 06:34:53 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:00.679 06:34:53 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:00.679 06:34:53 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:00.938 06:34:53 -- target/tls.sh@105 -- # ktls=false 00:17:00.938 06:34:53 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:00.938 06:34:53 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:01.197 06:34:53 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:01.197 06:34:53 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:17:01.456 06:34:54 -- target/tls.sh@113 -- # ktls=true 00:17:01.456 06:34:54 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:01.456 06:34:54 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:01.715 06:34:54 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:01.715 06:34:54 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:01.974 06:34:54 -- target/tls.sh@121 -- # ktls=false 00:17:01.974 06:34:54 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:01.974 06:34:54 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:01.974 06:34:54 -- target/tls.sh@49 -- # local key hash crc 00:17:01.974 06:34:54 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:01.974 06:34:54 -- target/tls.sh@51 -- # hash=01 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # gzip -1 -c 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # tail -c8 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # head -c 4 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # crc='p$H�' 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.974 06:34:54 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.974 06:34:54 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:01.974 06:34:54 -- target/tls.sh@49 -- # local key hash crc 00:17:01.974 06:34:54 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:01.974 06:34:54 -- target/tls.sh@51 -- # hash=01 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # gzip -1 -c 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # tail -c8 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # head -c 4 00:17:01.974 06:34:54 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:01.974 06:34:54 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:01.974 06:34:54 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:01.974 06:34:54 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:01.974 06:34:54 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:01.974 06:34:54 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.974 06:34:54 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:01.974 06:34:54 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:01.974 06:34:54 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:01.974 06:34:54 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:02.233 06:34:54 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:02.492 06:34:55 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:02.492 06:34:55 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:02.492 06:34:55 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.751 [2024-10-04 06:34:55.324671] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.751 06:34:55 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:03.010 06:34:55 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:03.269 [2024-10-04 06:34:55.832743] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:03.269 [2024-10-04 06:34:55.832982] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.269 06:34:55 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.527 malloc0 00:17:03.527 06:34:56 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:03.786 06:34:56 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.044 06:34:56 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.266 Initializing NVMe Controllers 00:17:16.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.266 Initialization complete. Launching workers. 
00:17:16.266 ======================================================== 00:17:16.266 Latency(us) 00:17:16.266 Device Information : IOPS MiB/s Average min max 00:17:16.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10728.17 41.91 5966.74 1704.04 7928.64 00:17:16.266 ======================================================== 00:17:16.266 Total : 10728.17 41.91 5966.74 1704.04 7928.64 00:17:16.266 00:17:16.266 06:35:06 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.267 06:35:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.267 06:35:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.267 06:35:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.267 06:35:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:16.267 06:35:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.267 06:35:06 -- target/tls.sh@28 -- # bdevperf_pid=88284 00:17:16.267 06:35:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.267 06:35:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.267 06:35:06 -- target/tls.sh@31 -- # waitforlisten 88284 /var/tmp/bdevperf.sock 00:17:16.267 06:35:06 -- common/autotest_common.sh@819 -- # '[' -z 88284 ']' 00:17:16.267 06:35:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.267 06:35:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.267 06:35:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.267 06:35:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.267 06:35:06 -- common/autotest_common.sh@10 -- # set +x 00:17:16.267 [2024-10-04 06:35:06.885865] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:16.267 [2024-10-04 06:35:06.885973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88284 ] 00:17:16.267 [2024-10-04 06:35:07.020601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.267 [2024-10-04 06:35:07.097190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.267 06:35:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:16.267 06:35:07 -- common/autotest_common.sh@852 -- # return 0 00:17:16.267 06:35:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.267 [2024-10-04 06:35:08.015729] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.267 TLSTESTn1 00:17:16.267 06:35:08 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.267 Running I/O for 10 seconds... 
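The verify job just launched runs for ten seconds against the TLS listener, using the same key1.txt that the spdk_nvme_perf run above presented; its results follow below. Meanwhile, the format_interchange_psk trace further up (target/tls.sh@49-54) is worth unpacking: the interchange key is the ASCII hex key with its 4-byte CRC-32 appended, base64-encoded, and wrapped as NVMeTLSkey-1:<hash>:...:. A reconstruction of the helper from the xtrace (a sketch, not the verbatim script source):

    format_interchange_psk() {
        local key=$1 hash=$2 crc
        # gzip -1 serves purely as a CRC-32 generator: the last 8 bytes of a gzip
        # stream are the little-endian CRC-32 of the input followed by the input
        # size, so tail -c8 | head -c 4 extracts the raw CRC bytes.
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
        # Append the raw CRC to the key and base64-encode the concatenation.
        echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 01
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Holding raw CRC bytes in a shell variable is fragile (NUL or trailing-newline bytes would be lost), which is presumably why the script routes the concatenation through base64 /dev/fd/62 instead. The replacement character in the crc='p$H�' line above is just such a byte rendered into a text log: the CRC here is 0x70 0x24 0x48 0x90, and 0x90 is not printable.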
00:17:26.244 00:17:26.244 Latency(us) 00:17:26.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.244 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.244 Verification LBA range: start 0x0 length 0x2000 00:17:26.244 TLSTESTn1 : 10.02 5707.46 22.29 0.00 0.00 22387.64 7447.27 22520.55 00:17:26.244 =================================================================================================================== 00:17:26.244 Total : 5707.46 22.29 0.00 0.00 22387.64 7447.27 22520.55 00:17:26.244 0 00:17:26.244 06:35:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.244 06:35:18 -- target/tls.sh@45 -- # killprocess 88284 00:17:26.244 06:35:18 -- common/autotest_common.sh@926 -- # '[' -z 88284 ']' 00:17:26.244 06:35:18 -- common/autotest_common.sh@930 -- # kill -0 88284 00:17:26.244 06:35:18 -- common/autotest_common.sh@931 -- # uname 00:17:26.244 06:35:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.244 06:35:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88284 00:17:26.244 06:35:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:26.244 06:35:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:26.244 06:35:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88284' 00:17:26.244 killing process with pid 88284 00:17:26.244 06:35:18 -- common/autotest_common.sh@945 -- # kill 88284 00:17:26.244 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.244 00:17:26.244 Latency(us) 00:17:26.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.244 =================================================================================================================== 00:17:26.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.244 06:35:18 -- common/autotest_common.sh@950 -- # wait 88284 00:17:26.244 06:35:18 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:26.244 06:35:18 -- common/autotest_common.sh@640 -- # local es=0 00:17:26.244 06:35:18 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:26.244 06:35:18 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:26.244 06:35:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.244 06:35:18 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:26.244 06:35:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.244 06:35:18 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:26.244 06:35:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.244 06:35:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.244 06:35:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.244 06:35:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:26.244 06:35:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.244 06:35:18 -- target/tls.sh@28 -- # bdevperf_pid=88437 00:17:26.244 06:35:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.244 06:35:18 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.244 06:35:18 -- target/tls.sh@31 -- # waitforlisten 88437 /var/tmp/bdevperf.sock 00:17:26.244 06:35:18 -- common/autotest_common.sh@819 -- # '[' -z 88437 ']' 00:17:26.244 06:35:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.244 06:35:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:26.244 06:35:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.244 06:35:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:26.244 06:35:18 -- common/autotest_common.sh@10 -- # set +x 00:17:26.244 [2024-10-04 06:35:18.603385] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:26.244 [2024-10-04 06:35:18.603869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88437 ] 00:17:26.244 [2024-10-04 06:35:18.735037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.244 [2024-10-04 06:35:18.814680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.181 06:35:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.181 06:35:19 -- common/autotest_common.sh@852 -- # return 0 00:17:27.181 06:35:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:27.181 [2024-10-04 06:35:19.797429] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.182 [2024-10-04 06:35:19.803072] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:27.182 [2024-10-04 06:35:19.804006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b847c0 (107): Transport endpoint is not connected 00:17:27.182 [2024-10-04 06:35:19.804994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b847c0 (9): Bad file descriptor 00:17:27.182 [2024-10-04 06:35:19.805990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:27.182 [2024-10-04 06:35:19.806022] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:27.182 [2024-10-04 06:35:19.806032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
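This is the first deliberate failure: the target only knows key1.txt for host1 (registered with nvmf_subsystem_add_host above), so presenting key2.txt breaks the TLS handshake, the controller lands in failed state, and the JSON-RPC error dump that follows is the client-side rendering of the same event. The attempt is wrapped in the NOT helper whose xtrace is visible above (local es=0 ... (( !es == 0 ))): it inverts the exit status so the test passes exactly when the wrapped command fails. A sketch of that contract, reconstructed from the trace rather than copied from autotest_common.sh (the real helper also handles signal exits and an optional expected status, which is what the (( es > 128 )) and [[ -n '' ]] checks in the trace are for):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( !es == 0 ))   # succeed only if the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt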
00:17:27.182 2024/10/04 06:35:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:27.182 request: 00:17:27.182 { 00:17:27.182 "method": "bdev_nvme_attach_controller", 00:17:27.182 "params": { 00:17:27.182 "name": "TLSTEST", 00:17:27.182 "trtype": "tcp", 00:17:27.182 "traddr": "10.0.0.2", 00:17:27.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.182 "adrfam": "ipv4", 00:17:27.182 "trsvcid": "4420", 00:17:27.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.182 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:27.182 } 00:17:27.182 } 00:17:27.182 Got JSON-RPC error response 00:17:27.182 GoRPCClient: error on JSON-RPC call 00:17:27.182 06:35:19 -- target/tls.sh@36 -- # killprocess 88437 00:17:27.182 06:35:19 -- common/autotest_common.sh@926 -- # '[' -z 88437 ']' 00:17:27.182 06:35:19 -- common/autotest_common.sh@930 -- # kill -0 88437 00:17:27.182 06:35:19 -- common/autotest_common.sh@931 -- # uname 00:17:27.182 06:35:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.182 06:35:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88437 00:17:27.440 killing process with pid 88437 00:17:27.440 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.440 00:17:27.440 Latency(us) 00:17:27.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.440 =================================================================================================================== 00:17:27.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:27.440 06:35:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:27.440 06:35:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:27.440 06:35:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88437' 00:17:27.440 06:35:19 -- common/autotest_common.sh@945 -- # kill 88437 00:17:27.440 06:35:19 -- common/autotest_common.sh@950 -- # wait 88437 00:17:27.700 06:35:20 -- target/tls.sh@37 -- # return 1 00:17:27.700 06:35:20 -- common/autotest_common.sh@643 -- # es=1 00:17:27.700 06:35:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:27.700 06:35:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:27.700 06:35:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:27.700 06:35:20 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.700 06:35:20 -- common/autotest_common.sh@640 -- # local es=0 00:17:27.700 06:35:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.700 06:35:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:27.700 06:35:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.700 06:35:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:27.700 06:35:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.700 06:35:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:27.700 06:35:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.700 06:35:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.700 06:35:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:27.700 06:35:20 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:27.700 06:35:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.700 06:35:20 -- target/tls.sh@28 -- # bdevperf_pid=88478 00:17:27.700 06:35:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.700 06:35:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.700 06:35:20 -- target/tls.sh@31 -- # waitforlisten 88478 /var/tmp/bdevperf.sock 00:17:27.700 06:35:20 -- common/autotest_common.sh@819 -- # '[' -z 88478 ']' 00:17:27.700 06:35:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.700 06:35:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.700 06:35:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.700 06:35:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.700 06:35:20 -- common/autotest_common.sh@10 -- # set +x 00:17:27.700 [2024-10-04 06:35:20.186881] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:27.700 [2024-10-04 06:35:20.187351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88478 ] 00:17:27.700 [2024-10-04 06:35:20.324254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.959 [2024-10-04 06:35:20.404045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.527 06:35:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.527 06:35:21 -- common/autotest_common.sh@852 -- # return 0 00:17:28.527 06:35:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:28.786 [2024-10-04 06:35:21.372531] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.786 [2024-10-04 06:35:21.383082] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:28.786 [2024-10-04 06:35:21.383127] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:28.786 [2024-10-04 06:35:21.383210] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:28.786 [2024-10-04 06:35:21.383986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e87c0 (107): Transport endpoint is not connected 
00:17:28.786 [2024-10-04 06:35:21.384975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e87c0 (9): Bad file descriptor 00:17:28.786 [2024-10-04 06:35:21.385971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:28.786 [2024-10-04 06:35:21.386322] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:28.787 [2024-10-04 06:35:21.386337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:28.787 2024/10/04 06:35:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:28.787 request: 00:17:28.787 { 00:17:28.787 "method": "bdev_nvme_attach_controller", 00:17:28.787 "params": { 00:17:28.787 "name": "TLSTEST", 00:17:28.787 "trtype": "tcp", 00:17:28.787 "traddr": "10.0.0.2", 00:17:28.787 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:28.787 "adrfam": "ipv4", 00:17:28.787 "trsvcid": "4420", 00:17:28.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.787 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:28.787 } 00:17:28.787 } 00:17:28.787 Got JSON-RPC error response 00:17:28.787 GoRPCClient: error on JSON-RPC call 00:17:28.787 06:35:21 -- target/tls.sh@36 -- # killprocess 88478 00:17:28.787 06:35:21 -- common/autotest_common.sh@926 -- # '[' -z 88478 ']' 00:17:28.787 06:35:21 -- common/autotest_common.sh@930 -- # kill -0 88478 00:17:28.787 06:35:21 -- common/autotest_common.sh@931 -- # uname 00:17:28.787 06:35:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.787 06:35:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88478 00:17:28.787 killing process with pid 88478 00:17:28.787 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.787 00:17:28.787 Latency(us) 00:17:28.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.787 =================================================================================================================== 00:17:28.787 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.787 06:35:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:28.787 06:35:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:28.787 06:35:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88478' 00:17:28.787 06:35:21 -- common/autotest_common.sh@945 -- # kill 88478 00:17:28.787 06:35:21 -- common/autotest_common.sh@950 -- # wait 88478 00:17:29.046 06:35:21 -- target/tls.sh@37 -- # return 1 00:17:29.046 06:35:21 -- common/autotest_common.sh@643 -- # es=1 00:17:29.046 06:35:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:29.046 06:35:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:29.046 06:35:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:29.046 06:35:21 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.046 06:35:21 -- common/autotest_common.sh@640 -- # local es=0 00:17:29.046 06:35:21 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.046 06:35:21 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:29.046 06:35:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.046 06:35:21 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:29.046 06:35:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.046 06:35:21 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.046 06:35:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.046 06:35:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:29.046 06:35:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:29.046 06:35:21 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:29.046 06:35:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.046 06:35:21 -- target/tls.sh@28 -- # bdevperf_pid=88528 00:17:29.046 06:35:21 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.046 06:35:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.046 06:35:21 -- target/tls.sh@31 -- # waitforlisten 88528 /var/tmp/bdevperf.sock 00:17:29.046 06:35:21 -- common/autotest_common.sh@819 -- # '[' -z 88528 ']' 00:17:29.046 06:35:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.046 06:35:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.046 06:35:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.046 06:35:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.046 06:35:21 -- common/autotest_common.sh@10 -- # set +x 00:17:29.304 [2024-10-04 06:35:21.759308] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:17:29.304 [2024-10-04 06:35:21.759444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88528 ] 00:17:29.304 [2024-10-04 06:35:21.890441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.304 [2024-10-04 06:35:21.960810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.259 06:35:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.259 06:35:22 -- common/autotest_common.sh@852 -- # return 0 00:17:30.259 06:35:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:30.259 [2024-10-04 06:35:22.899269] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.259 [2024-10-04 06:35:22.903972] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:30.259 [2024-10-04 06:35:22.904020] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:30.259 [2024-10-04 06:35:22.904091] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:30.259 [2024-10-04 06:35:22.904650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16db7c0 (107): Transport endpoint is not connected 00:17:30.259 [2024-10-04 06:35:22.905634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16db7c0 (9): Bad file descriptor 00:17:30.259 [2024-10-04 06:35:22.906631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:30.259 [2024-10-04 06:35:22.906651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:30.259 [2024-10-04 06:35:22.906667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
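Here the key material and hostnqn are fine but the subsystem NQN is not: cnode2 was never created, so the server-side PSK lookup fails, mirroring the previous case where host2 was the unknown party. As the two tcp_sock_get_key errors above show, the identity searched during the handshake couples both NQNs, so a mismatch on either side prevents the key from being found:

    # PSK identity string, exactly as logged by tcp_sock_get_key and
    # posix_sock_psk_find_session_server_cb: NVMe0R01 <hostnqn> <subnqn>
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
    # No entry matches: only (host1, cnode1) was registered, with key1.txt.

The JSON-RPC dump below is once again the client-side view of that failure.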
00:17:30.259 2024/10/04 06:35:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:30.259 request: 00:17:30.259 { 00:17:30.259 "method": "bdev_nvme_attach_controller", 00:17:30.259 "params": { 00:17:30.259 "name": "TLSTEST", 00:17:30.259 "trtype": "tcp", 00:17:30.259 "traddr": "10.0.0.2", 00:17:30.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.259 "adrfam": "ipv4", 00:17:30.259 "trsvcid": "4420", 00:17:30.259 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:30.259 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:30.259 } 00:17:30.259 } 00:17:30.259 Got JSON-RPC error response 00:17:30.259 GoRPCClient: error on JSON-RPC call 00:17:30.569 06:35:22 -- target/tls.sh@36 -- # killprocess 88528 00:17:30.569 06:35:22 -- common/autotest_common.sh@926 -- # '[' -z 88528 ']' 00:17:30.569 06:35:22 -- common/autotest_common.sh@930 -- # kill -0 88528 00:17:30.569 06:35:22 -- common/autotest_common.sh@931 -- # uname 00:17:30.569 06:35:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.569 06:35:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88528 00:17:30.569 killing process with pid 88528 00:17:30.569 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.569 00:17:30.569 Latency(us) 00:17:30.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.569 =================================================================================================================== 00:17:30.569 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.569 06:35:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:30.569 06:35:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:30.569 06:35:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88528' 00:17:30.569 06:35:22 -- common/autotest_common.sh@945 -- # kill 88528 00:17:30.569 06:35:22 -- common/autotest_common.sh@950 -- # wait 88528 00:17:30.569 06:35:23 -- target/tls.sh@37 -- # return 1 00:17:30.569 06:35:23 -- common/autotest_common.sh@643 -- # es=1 00:17:30.569 06:35:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:30.569 06:35:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:30.569 06:35:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:30.569 06:35:23 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:30.569 06:35:23 -- common/autotest_common.sh@640 -- # local es=0 00:17:30.569 06:35:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:30.569 06:35:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:30.569 06:35:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.569 06:35:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:30.569 06:35:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.569 06:35:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:30.569 06:35:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.569 06:35:23 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.569 06:35:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.569 06:35:23 -- target/tls.sh@23 -- # psk= 00:17:30.569 06:35:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.569 06:35:23 -- target/tls.sh@28 -- # bdevperf_pid=88569 00:17:30.569 06:35:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.569 06:35:23 -- target/tls.sh@31 -- # waitforlisten 88569 /var/tmp/bdevperf.sock 00:17:30.569 06:35:23 -- common/autotest_common.sh@819 -- # '[' -z 88569 ']' 00:17:30.569 06:35:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.570 06:35:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.570 06:35:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.570 06:35:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.570 06:35:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.570 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:17:30.831 [2024-10-04 06:35:23.257381] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:30.831 [2024-10-04 06:35:23.257490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88569 ] 00:17:30.831 [2024-10-04 06:35:23.391872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.831 [2024-10-04 06:35:23.453051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.766 06:35:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.766 06:35:24 -- common/autotest_common.sh@852 -- # return 0 00:17:31.766 06:35:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:31.766 [2024-10-04 06:35:24.423391] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.766 [2024-10-04 06:35:24.424871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb090 (9): Bad file descriptor 00:17:31.766 [2024-10-04 06:35:24.425864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:31.766 [2024-10-04 06:35:24.425882] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:31.766 [2024-10-04 06:35:24.425897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
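The last negative case drops --psk entirely: the initiator attempts a cleartext NVMe/TCP connect against a listener created with -k (TLS required), and the connection dies at the socket layer, apparently before any PSK lookup happens (unlike the two previous cases, no tcp_sock_get_key error is logged). Accordingly, the request dump below carries no psk parameter at all. The contrast in the attach call, both forms taken from this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Earlier, with TLS: --psk points at an interchange-format key file.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # This case: the same call with no --psk, rejected by the TLS-only listener.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1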
00:17:31.766 2024/10/04 06:35:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:31.766 request: 00:17:31.766 { 00:17:31.766 "method": "bdev_nvme_attach_controller", 00:17:31.766 "params": { 00:17:31.766 "name": "TLSTEST", 00:17:31.766 "trtype": "tcp", 00:17:31.766 "traddr": "10.0.0.2", 00:17:31.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.766 "adrfam": "ipv4", 00:17:31.766 "trsvcid": "4420", 00:17:31.766 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:31.766 } 00:17:31.766 } 00:17:31.766 Got JSON-RPC error response 00:17:31.766 GoRPCClient: error on JSON-RPC call 00:17:32.025 06:35:24 -- target/tls.sh@36 -- # killprocess 88569 00:17:32.025 06:35:24 -- common/autotest_common.sh@926 -- # '[' -z 88569 ']' 00:17:32.025 06:35:24 -- common/autotest_common.sh@930 -- # kill -0 88569 00:17:32.025 06:35:24 -- common/autotest_common.sh@931 -- # uname 00:17:32.025 06:35:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.025 06:35:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88569 00:17:32.025 killing process with pid 88569 00:17:32.025 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.025 00:17:32.025 Latency(us) 00:17:32.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.025 =================================================================================================================== 00:17:32.025 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.025 06:35:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:32.025 06:35:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:32.025 06:35:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88569' 00:17:32.025 06:35:24 -- common/autotest_common.sh@945 -- # kill 88569 00:17:32.025 06:35:24 -- common/autotest_common.sh@950 -- # wait 88569 00:17:32.284 06:35:24 -- target/tls.sh@37 -- # return 1 00:17:32.284 06:35:24 -- common/autotest_common.sh@643 -- # es=1 00:17:32.284 06:35:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:32.284 06:35:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:32.284 06:35:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:32.284 06:35:24 -- target/tls.sh@167 -- # killprocess 87908 00:17:32.284 06:35:24 -- common/autotest_common.sh@926 -- # '[' -z 87908 ']' 00:17:32.284 06:35:24 -- common/autotest_common.sh@930 -- # kill -0 87908 00:17:32.284 06:35:24 -- common/autotest_common.sh@931 -- # uname 00:17:32.284 06:35:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.284 06:35:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87908 00:17:32.284 killing process with pid 87908 00:17:32.284 06:35:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.284 06:35:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.284 06:35:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87908' 00:17:32.284 06:35:24 -- common/autotest_common.sh@945 -- # kill 87908 00:17:32.284 06:35:24 -- common/autotest_common.sh@950 -- # wait 87908 00:17:32.543 06:35:25 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:32.543 06:35:25 -- 
target/tls.sh@49 -- # local key hash crc 00:17:32.543 06:35:25 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:32.543 06:35:25 -- target/tls.sh@51 -- # hash=02 00:17:32.543 06:35:25 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:32.543 06:35:25 -- target/tls.sh@52 -- # gzip -1 -c 00:17:32.543 06:35:25 -- target/tls.sh@52 -- # tail -c8 00:17:32.543 06:35:25 -- target/tls.sh@52 -- # head -c 4 00:17:32.543 06:35:25 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:32.543 06:35:25 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:32.543 06:35:25 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:32.543 06:35:25 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:32.543 06:35:25 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:32.543 06:35:25 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.543 06:35:25 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:32.543 06:35:25 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:32.543 06:35:25 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:32.543 06:35:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:32.543 06:35:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:32.543 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:17:32.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.543 06:35:25 -- nvmf/common.sh@469 -- # nvmfpid=88634 00:17:32.543 06:35:25 -- nvmf/common.sh@470 -- # waitforlisten 88634 00:17:32.543 06:35:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:32.543 06:35:25 -- common/autotest_common.sh@819 -- # '[' -z 88634 ']' 00:17:32.543 06:35:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.543 06:35:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:32.543 06:35:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.543 06:35:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:32.543 06:35:25 -- common/autotest_common.sh@10 -- # set +x 00:17:32.543 [2024-10-04 06:35:25.083494] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:32.543 [2024-10-04 06:35:25.083590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.543 [2024-10-04 06:35:25.220230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.801 [2024-10-04 06:35:25.289423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.801 [2024-10-04 06:35:25.289571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.801 [2024-10-04 06:35:25.289583] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:32.801 [2024-10-04 06:35:25.289591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.801 [2024-10-04 06:35:25.289620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.368 06:35:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:33.368 06:35:26 -- common/autotest_common.sh@852 -- # return 0 00:17:33.368 06:35:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:33.368 06:35:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:33.368 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:17:33.626 06:35:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.626 06:35:26 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.626 06:35:26 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.626 06:35:26 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.885 [2024-10-04 06:35:26.307030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.885 06:35:26 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.143 06:35:26 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:34.143 [2024-10-04 06:35:26.791118] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:34.143 [2024-10-04 06:35:26.791405] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.143 06:35:26 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:34.710 malloc0 00:17:34.710 06:35:27 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:34.710 06:35:27 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.968 06:35:27 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:34.968 06:35:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:34.968 06:35:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:34.968 06:35:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:34.968 06:35:27 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:34.968 06:35:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:34.968 06:35:27 -- target/tls.sh@28 -- # bdevperf_pid=88740 00:17:34.968 06:35:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:34.968 06:35:27 -- target/tls.sh@31 -- # waitforlisten 88740 /var/tmp/bdevperf.sock 00:17:34.968 06:35:27 -- common/autotest_common.sh@819 -- # '[' -z 88740 ']' 00:17:34.968 06:35:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:34.968 06:35:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.968 
06:35:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.968 06:35:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.968 06:35:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.968 06:35:27 -- common/autotest_common.sh@10 -- # set +x 00:17:34.968 [2024-10-04 06:35:27.643630] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:34.968 [2024-10-04 06:35:27.643744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88740 ] 00:17:35.226 [2024-10-04 06:35:27.781908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.226 [2024-10-04 06:35:27.857620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.159 06:35:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:36.159 06:35:28 -- common/autotest_common.sh@852 -- # return 0 00:17:36.159 06:35:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.417 [2024-10-04 06:35:28.892317] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.417 TLSTESTn1 00:17:36.417 06:35:28 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:36.417 Running I/O for 10 seconds... 
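This second bdevperf pass exercises key_long.txt, the hash-02 interchange key derived above from the 48-hex-digit key; the construction is the same CRC-plus-base64 wrapping as before, only the hash field differs (per the NVMe TLS PSK interchange format the hash field selects the HMAC, 01 for SHA-256 and 02 for SHA-384; the log itself only shows the field values, so treat that mapping as background). Before the run, setup_nvmf_tgt rebuilt the target; condensed, that sequence is (each command verbatim from the trace above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

The ten-second verify results follow below.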
00:17:48.621 00:17:48.621 Latency(us) 00:17:48.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.621 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.621 Verification LBA range: start 0x0 length 0x2000 00:17:48.622 TLSTESTn1 : 10.02 5675.67 22.17 0.00 0.00 22512.09 5451.40 18945.86 00:17:48.622 =================================================================================================================== 00:17:48.622 Total : 5675.67 22.17 0.00 0.00 22512.09 5451.40 18945.86 00:17:48.622 0 00:17:48.622 06:35:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.622 06:35:39 -- target/tls.sh@45 -- # killprocess 88740 00:17:48.622 06:35:39 -- common/autotest_common.sh@926 -- # '[' -z 88740 ']' 00:17:48.622 06:35:39 -- common/autotest_common.sh@930 -- # kill -0 88740 00:17:48.622 06:35:39 -- common/autotest_common.sh@931 -- # uname 00:17:48.622 06:35:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.622 06:35:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88740 00:17:48.622 killing process with pid 88740 00:17:48.622 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.622 00:17:48.622 Latency(us) 00:17:48.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.622 =================================================================================================================== 00:17:48.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.622 06:35:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:48.622 06:35:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:48.622 06:35:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88740' 00:17:48.622 06:35:39 -- common/autotest_common.sh@945 -- # kill 88740 00:17:48.622 06:35:39 -- common/autotest_common.sh@950 -- # wait 88740 00:17:48.622 06:35:39 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 06:35:39 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 06:35:39 -- common/autotest_common.sh@640 -- # local es=0 00:17:48.622 06:35:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 06:35:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:48.622 06:35:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:48.622 06:35:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:48.622 06:35:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:48.622 06:35:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 06:35:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.622 06:35:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.622 06:35:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.622 06:35:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:48.622 06:35:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.622 06:35:39 -- target/tls.sh@28 -- # bdevperf_pid=88887 
00:17:48.622 06:35:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.622 06:35:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.622 06:35:39 -- target/tls.sh@31 -- # waitforlisten 88887 /var/tmp/bdevperf.sock 00:17:48.622 06:35:39 -- common/autotest_common.sh@819 -- # '[' -z 88887 ']' 00:17:48.622 06:35:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.622 06:35:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.622 06:35:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.622 06:35:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.622 06:35:39 -- common/autotest_common.sh@10 -- # set +x 00:17:48.622 [2024-10-04 06:35:39.483879] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:48.622 [2024-10-04 06:35:39.484111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88887 ] 00:17:48.622 [2024-10-04 06:35:39.612452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.622 [2024-10-04 06:35:39.683753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.622 06:35:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:48.622 06:35:40 -- common/autotest_common.sh@852 -- # return 0 00:17:48.622 06:35:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 [2024-10-04 06:35:40.687586] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.622 [2024-10-04 06:35:40.687915] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:48.622 2024/10/04 06:35:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:48.622 request: 00:17:48.622 { 00:17:48.622 "method": "bdev_nvme_attach_controller", 00:17:48.622 "params": { 00:17:48.622 "name": "TLSTEST", 00:17:48.622 "trtype": "tcp", 00:17:48.622 "traddr": "10.0.0.2", 00:17:48.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.622 "adrfam": "ipv4", 00:17:48.622 "trsvcid": "4420", 00:17:48.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.622 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:48.622 } 00:17:48.622 } 00:17:48.622 Got JSON-RPC error response 00:17:48.622 GoRPCClient: error on JSON-RPC call 00:17:48.622 06:35:40 -- target/tls.sh@36 -- # killprocess 88887 00:17:48.622 06:35:40 -- common/autotest_common.sh@926 -- # '[' -z 88887 ']' 
00:17:48.622 06:35:40 -- common/autotest_common.sh@930 -- # kill -0 88887 00:17:48.622 06:35:40 -- common/autotest_common.sh@931 -- # uname 00:17:48.622 06:35:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.622 06:35:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88887 00:17:48.622 killing process with pid 88887 00:17:48.622 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.622 00:17:48.622 Latency(us) 00:17:48.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.622 =================================================================================================================== 00:17:48.622 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.622 06:35:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:48.622 06:35:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:48.622 06:35:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88887' 00:17:48.622 06:35:40 -- common/autotest_common.sh@945 -- # kill 88887 00:17:48.622 06:35:40 -- common/autotest_common.sh@950 -- # wait 88887 00:17:48.622 06:35:40 -- target/tls.sh@37 -- # return 1 00:17:48.622 06:35:40 -- common/autotest_common.sh@643 -- # es=1 00:17:48.622 06:35:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:48.622 06:35:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:48.622 06:35:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:48.622 06:35:40 -- target/tls.sh@183 -- # killprocess 88634 00:17:48.622 06:35:40 -- common/autotest_common.sh@926 -- # '[' -z 88634 ']' 00:17:48.623 06:35:40 -- common/autotest_common.sh@930 -- # kill -0 88634 00:17:48.623 06:35:40 -- common/autotest_common.sh@931 -- # uname 00:17:48.623 06:35:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.623 06:35:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88634 00:17:48.623 killing process with pid 88634 00:17:48.623 06:35:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:48.623 06:35:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:48.623 06:35:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88634' 00:17:48.623 06:35:41 -- common/autotest_common.sh@945 -- # kill 88634 00:17:48.623 06:35:41 -- common/autotest_common.sh@950 -- # wait 88634 00:17:48.623 06:35:41 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:48.623 06:35:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:48.623 06:35:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:48.623 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:17:48.623 06:35:41 -- nvmf/common.sh@469 -- # nvmfpid=88942 00:17:48.623 06:35:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:48.623 06:35:41 -- nvmf/common.sh@470 -- # waitforlisten 88942 00:17:48.623 06:35:41 -- common/autotest_common.sh@819 -- # '[' -z 88942 ']' 00:17:48.623 06:35:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.623 06:35:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.623 06:35:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:48.623 06:35:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.623 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:17:48.623 [2024-10-04 06:35:41.296187] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:48.623 [2024-10-04 06:35:41.296289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.881 [2024-10-04 06:35:41.431398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.881 [2024-10-04 06:35:41.486547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.881 [2024-10-04 06:35:41.486972] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.881 [2024-10-04 06:35:41.487022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.881 [2024-10-04 06:35:41.487031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.881 [2024-10-04 06:35:41.487070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.816 06:35:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.816 06:35:42 -- common/autotest_common.sh@852 -- # return 0 00:17:49.816 06:35:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.816 06:35:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:49.816 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:17:49.816 06:35:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.816 06:35:42 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.816 06:35:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:49.816 06:35:42 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.816 06:35:42 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:49.816 06:35:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.816 06:35:42 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:49.816 06:35:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:49.816 06:35:42 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.816 06:35:42 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.816 06:35:42 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.074 [2024-10-04 06:35:42.550797] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.074 06:35:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:50.335 06:35:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:50.335 [2024-10-04 06:35:42.970899] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.335 [2024-10-04 06:35:42.971177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
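The chmod 0666 at target/tls.sh@179 above is the point of this negative test: SPDK refuses to load a world-accessible PSK, so the initiator-side bdev_nvme_attach_controller already failed with Code=-22 ("Could not retrieve PSK from file"), and the nvmf_subsystem_add_host that follows fails the same way with Code=-32603 from tcp_load_psk ("Incorrect permissions for PSK file"). A minimal sketch of the permission round-trip, assuming the same key file:

  chmod 0666 test/nvmf/target/key_long.txt   # too permissive: tcp_load_psk rejects the key
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt   # fails with "Could not retrieve PSK from file"
  chmod 0600 test/nvmf/target/key_long.txt   # owner-only, as restored at target/tls.sh@190, satisfies the check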
00:17:50.335 06:35:42 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.603 malloc0 00:17:50.603 06:35:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.870 06:35:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:51.129 [2024-10-04 06:35:43.669963] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:51.129 [2024-10-04 06:35:43.670009] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:51.129 [2024-10-04 06:35:43.670029] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:51.129 2024/10/04 06:35:43 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:51.129 request: 00:17:51.129 { 00:17:51.129 "method": "nvmf_subsystem_add_host", 00:17:51.129 "params": { 00:17:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.129 "host": "nqn.2016-06.io.spdk:host1", 00:17:51.129 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:51.129 } 00:17:51.129 } 00:17:51.129 Got JSON-RPC error response 00:17:51.129 GoRPCClient: error on JSON-RPC call 00:17:51.129 06:35:43 -- common/autotest_common.sh@643 -- # es=1 00:17:51.129 06:35:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:51.129 06:35:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:51.129 06:35:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:51.129 06:35:43 -- target/tls.sh@189 -- # killprocess 88942 00:17:51.129 06:35:43 -- common/autotest_common.sh@926 -- # '[' -z 88942 ']' 00:17:51.129 06:35:43 -- common/autotest_common.sh@930 -- # kill -0 88942 00:17:51.129 06:35:43 -- common/autotest_common.sh@931 -- # uname 00:17:51.129 06:35:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.129 06:35:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88942 00:17:51.129 06:35:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:51.129 06:35:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:51.129 killing process with pid 88942 00:17:51.129 06:35:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88942' 00:17:51.129 06:35:43 -- common/autotest_common.sh@945 -- # kill 88942 00:17:51.129 06:35:43 -- common/autotest_common.sh@950 -- # wait 88942 00:17:51.388 06:35:43 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:51.388 06:35:43 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:51.388 06:35:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.388 06:35:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:51.388 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 06:35:43 -- nvmf/common.sh@469 -- # nvmfpid=89054 00:17:51.388 06:35:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.388 06:35:43 -- nvmf/common.sh@470 -- # waitforlisten 89054 00:17:51.388 06:35:43 -- 
common/autotest_common.sh@819 -- # '[' -z 89054 ']' 00:17:51.388 06:35:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.388 06:35:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.388 06:35:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.388 06:35:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.388 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:17:51.388 [2024-10-04 06:35:44.003051] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:51.388 [2024-10-04 06:35:44.003149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.652 [2024-10-04 06:35:44.142212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.652 [2024-10-04 06:35:44.222242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.652 [2024-10-04 06:35:44.222384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.652 [2024-10-04 06:35:44.222397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.652 [2024-10-04 06:35:44.222405] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.652 [2024-10-04 06:35:44.222432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.592 06:35:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.592 06:35:44 -- common/autotest_common.sh@852 -- # return 0 00:17:52.592 06:35:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.592 06:35:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:52.592 06:35:44 -- common/autotest_common.sh@10 -- # set +x 00:17:52.592 06:35:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.592 06:35:45 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:52.592 06:35:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:52.592 06:35:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.851 [2024-10-04 06:35:45.287201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.851 06:35:45 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:52.851 06:35:45 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.109 [2024-10-04 06:35:45.755291] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.109 [2024-10-04 06:35:45.755542] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.109 06:35:45 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.367 malloc0 00:17:53.367 06:35:45 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.625 06:35:46 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:53.883 06:35:46 -- target/tls.sh@197 -- # bdevperf_pid=89151 00:17:53.883 06:35:46 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.883 06:35:46 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.883 06:35:46 -- target/tls.sh@200 -- # waitforlisten 89151 /var/tmp/bdevperf.sock 00:17:53.883 06:35:46 -- common/autotest_common.sh@819 -- # '[' -z 89151 ']' 00:17:53.883 06:35:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.883 06:35:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.883 06:35:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.883 06:35:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:53.883 06:35:46 -- common/autotest_common.sh@10 -- # set +x 00:17:53.883 [2024-10-04 06:35:46.495893] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:53.883 [2024-10-04 06:35:46.496018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89151 ] 00:17:54.142 [2024-10-04 06:35:46.622648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.142 [2024-10-04 06:35:46.693520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.076 06:35:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.076 06:35:47 -- common/autotest_common.sh@852 -- # return 0 00:17:55.076 06:35:47 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:55.076 [2024-10-04 06:35:47.671424] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.076 TLSTESTn1 00:17:55.335 06:35:47 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:55.594 06:35:48 -- target/tls.sh@205 -- # tgtconf='{ 00:17:55.594 "subsystems": [ 00:17:55.594 { 00:17:55.594 "subsystem": "iobuf", 00:17:55.594 "config": [ 00:17:55.594 { 00:17:55.594 "method": "iobuf_set_options", 00:17:55.594 "params": { 00:17:55.594 "large_bufsize": 135168, 00:17:55.594 "large_pool_count": 1024, 00:17:55.594 "small_bufsize": 8192, 00:17:55.594 "small_pool_count": 8192 00:17:55.594 } 00:17:55.594 } 00:17:55.594 ] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "sock", 00:17:55.594 "config": [ 00:17:55.594 { 00:17:55.594 "method": "sock_impl_set_options", 00:17:55.594 "params": { 00:17:55.594 "enable_ktls": false, 00:17:55.594 "enable_placement_id": 0, 00:17:55.594 "enable_quickack": false, 00:17:55.594 "enable_recv_pipe": true, 00:17:55.594 
"enable_zerocopy_send_client": false, 00:17:55.594 "enable_zerocopy_send_server": true, 00:17:55.594 "impl_name": "posix", 00:17:55.594 "recv_buf_size": 2097152, 00:17:55.594 "send_buf_size": 2097152, 00:17:55.594 "tls_version": 0, 00:17:55.594 "zerocopy_threshold": 0 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "sock_impl_set_options", 00:17:55.594 "params": { 00:17:55.594 "enable_ktls": false, 00:17:55.594 "enable_placement_id": 0, 00:17:55.594 "enable_quickack": false, 00:17:55.594 "enable_recv_pipe": true, 00:17:55.594 "enable_zerocopy_send_client": false, 00:17:55.594 "enable_zerocopy_send_server": true, 00:17:55.594 "impl_name": "ssl", 00:17:55.594 "recv_buf_size": 4096, 00:17:55.594 "send_buf_size": 4096, 00:17:55.594 "tls_version": 0, 00:17:55.594 "zerocopy_threshold": 0 00:17:55.594 } 00:17:55.594 } 00:17:55.594 ] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "vmd", 00:17:55.594 "config": [] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "accel", 00:17:55.594 "config": [ 00:17:55.594 { 00:17:55.594 "method": "accel_set_options", 00:17:55.594 "params": { 00:17:55.594 "buf_count": 2048, 00:17:55.594 "large_cache_size": 16, 00:17:55.594 "sequence_count": 2048, 00:17:55.594 "small_cache_size": 128, 00:17:55.594 "task_count": 2048 00:17:55.594 } 00:17:55.594 } 00:17:55.594 ] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "bdev", 00:17:55.594 "config": [ 00:17:55.594 { 00:17:55.594 "method": "bdev_set_options", 00:17:55.594 "params": { 00:17:55.594 "bdev_auto_examine": true, 00:17:55.594 "bdev_io_cache_size": 256, 00:17:55.594 "bdev_io_pool_size": 65535, 00:17:55.594 "iobuf_large_cache_size": 16, 00:17:55.594 "iobuf_small_cache_size": 128 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_raid_set_options", 00:17:55.594 "params": { 00:17:55.594 "process_window_size_kb": 1024 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_iscsi_set_options", 00:17:55.594 "params": { 00:17:55.594 "timeout_sec": 30 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_nvme_set_options", 00:17:55.594 "params": { 00:17:55.594 "action_on_timeout": "none", 00:17:55.594 "allow_accel_sequence": false, 00:17:55.594 "arbitration_burst": 0, 00:17:55.594 "bdev_retry_count": 3, 00:17:55.594 "ctrlr_loss_timeout_sec": 0, 00:17:55.594 "delay_cmd_submit": true, 00:17:55.594 "fast_io_fail_timeout_sec": 0, 00:17:55.594 "generate_uuids": false, 00:17:55.594 "high_priority_weight": 0, 00:17:55.594 "io_path_stat": false, 00:17:55.594 "io_queue_requests": 0, 00:17:55.594 "keep_alive_timeout_ms": 10000, 00:17:55.594 "low_priority_weight": 0, 00:17:55.594 "medium_priority_weight": 0, 00:17:55.594 "nvme_adminq_poll_period_us": 10000, 00:17:55.594 "nvme_ioq_poll_period_us": 0, 00:17:55.594 "reconnect_delay_sec": 0, 00:17:55.594 "timeout_admin_us": 0, 00:17:55.594 "timeout_us": 0, 00:17:55.594 "transport_ack_timeout": 0, 00:17:55.594 "transport_retry_count": 4, 00:17:55.594 "transport_tos": 0 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_nvme_set_hotplug", 00:17:55.594 "params": { 00:17:55.594 "enable": false, 00:17:55.594 "period_us": 100000 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_malloc_create", 00:17:55.594 "params": { 00:17:55.594 "block_size": 4096, 00:17:55.594 "name": "malloc0", 00:17:55.594 "num_blocks": 8192, 00:17:55.594 "optimal_io_boundary": 0, 00:17:55.594 "physical_block_size": 4096, 00:17:55.594 "uuid": 
"94ff480c-de8d-496d-8134-98d427f092e0" 00:17:55.594 } 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "method": "bdev_wait_for_examine" 00:17:55.594 } 00:17:55.594 ] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "nbd", 00:17:55.594 "config": [] 00:17:55.594 }, 00:17:55.594 { 00:17:55.594 "subsystem": "scheduler", 00:17:55.595 "config": [ 00:17:55.595 { 00:17:55.595 "method": "framework_set_scheduler", 00:17:55.595 "params": { 00:17:55.595 "name": "static" 00:17:55.595 } 00:17:55.595 } 00:17:55.595 ] 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "subsystem": "nvmf", 00:17:55.595 "config": [ 00:17:55.595 { 00:17:55.595 "method": "nvmf_set_config", 00:17:55.595 "params": { 00:17:55.595 "admin_cmd_passthru": { 00:17:55.595 "identify_ctrlr": false 00:17:55.595 }, 00:17:55.595 "discovery_filter": "match_any" 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_set_max_subsystems", 00:17:55.595 "params": { 00:17:55.595 "max_subsystems": 1024 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_set_crdt", 00:17:55.595 "params": { 00:17:55.595 "crdt1": 0, 00:17:55.595 "crdt2": 0, 00:17:55.595 "crdt3": 0 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_create_transport", 00:17:55.595 "params": { 00:17:55.595 "abort_timeout_sec": 1, 00:17:55.595 "buf_cache_size": 4294967295, 00:17:55.595 "c2h_success": false, 00:17:55.595 "dif_insert_or_strip": false, 00:17:55.595 "in_capsule_data_size": 4096, 00:17:55.595 "io_unit_size": 131072, 00:17:55.595 "max_aq_depth": 128, 00:17:55.595 "max_io_qpairs_per_ctrlr": 127, 00:17:55.595 "max_io_size": 131072, 00:17:55.595 "max_queue_depth": 128, 00:17:55.595 "num_shared_buffers": 511, 00:17:55.595 "sock_priority": 0, 00:17:55.595 "trtype": "TCP", 00:17:55.595 "zcopy": false 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_create_subsystem", 00:17:55.595 "params": { 00:17:55.595 "allow_any_host": false, 00:17:55.595 "ana_reporting": false, 00:17:55.595 "max_cntlid": 65519, 00:17:55.595 "max_namespaces": 10, 00:17:55.595 "min_cntlid": 1, 00:17:55.595 "model_number": "SPDK bdev Controller", 00:17:55.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.595 "serial_number": "SPDK00000000000001" 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_subsystem_add_host", 00:17:55.595 "params": { 00:17:55.595 "host": "nqn.2016-06.io.spdk:host1", 00:17:55.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.595 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_subsystem_add_ns", 00:17:55.595 "params": { 00:17:55.595 "namespace": { 00:17:55.595 "bdev_name": "malloc0", 00:17:55.595 "nguid": "94FF480CDE8D496D813498D427F092E0", 00:17:55.595 "nsid": 1, 00:17:55.595 "uuid": "94ff480c-de8d-496d-8134-98d427f092e0" 00:17:55.595 }, 00:17:55.595 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:55.595 } 00:17:55.595 }, 00:17:55.595 { 00:17:55.595 "method": "nvmf_subsystem_add_listener", 00:17:55.595 "params": { 00:17:55.595 "listen_address": { 00:17:55.595 "adrfam": "IPv4", 00:17:55.595 "traddr": "10.0.0.2", 00:17:55.595 "trsvcid": "4420", 00:17:55.595 "trtype": "TCP" 00:17:55.595 }, 00:17:55.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.595 "secure_channel": true 00:17:55.595 } 00:17:55.595 } 00:17:55.595 ] 00:17:55.595 } 00:17:55.595 ] 00:17:55.595 }' 00:17:55.595 06:35:48 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:55.854 06:35:48 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:55.854 "subsystems": [ 00:17:55.854 { 00:17:55.854 "subsystem": "iobuf", 00:17:55.854 "config": [ 00:17:55.854 { 00:17:55.854 "method": "iobuf_set_options", 00:17:55.854 "params": { 00:17:55.854 "large_bufsize": 135168, 00:17:55.854 "large_pool_count": 1024, 00:17:55.854 "small_bufsize": 8192, 00:17:55.854 "small_pool_count": 8192 00:17:55.854 } 00:17:55.854 } 00:17:55.854 ] 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "subsystem": "sock", 00:17:55.854 "config": [ 00:17:55.854 { 00:17:55.854 "method": "sock_impl_set_options", 00:17:55.854 "params": { 00:17:55.854 "enable_ktls": false, 00:17:55.854 "enable_placement_id": 0, 00:17:55.854 "enable_quickack": false, 00:17:55.854 "enable_recv_pipe": true, 00:17:55.854 "enable_zerocopy_send_client": false, 00:17:55.854 "enable_zerocopy_send_server": true, 00:17:55.854 "impl_name": "posix", 00:17:55.854 "recv_buf_size": 2097152, 00:17:55.854 "send_buf_size": 2097152, 00:17:55.854 "tls_version": 0, 00:17:55.854 "zerocopy_threshold": 0 00:17:55.854 } 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "method": "sock_impl_set_options", 00:17:55.854 "params": { 00:17:55.854 "enable_ktls": false, 00:17:55.854 "enable_placement_id": 0, 00:17:55.854 "enable_quickack": false, 00:17:55.854 "enable_recv_pipe": true, 00:17:55.854 "enable_zerocopy_send_client": false, 00:17:55.854 "enable_zerocopy_send_server": true, 00:17:55.854 "impl_name": "ssl", 00:17:55.854 "recv_buf_size": 4096, 00:17:55.854 "send_buf_size": 4096, 00:17:55.854 "tls_version": 0, 00:17:55.854 "zerocopy_threshold": 0 00:17:55.854 } 00:17:55.854 } 00:17:55.854 ] 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "subsystem": "vmd", 00:17:55.854 "config": [] 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "subsystem": "accel", 00:17:55.854 "config": [ 00:17:55.854 { 00:17:55.854 "method": "accel_set_options", 00:17:55.854 "params": { 00:17:55.854 "buf_count": 2048, 00:17:55.854 "large_cache_size": 16, 00:17:55.854 "sequence_count": 2048, 00:17:55.854 "small_cache_size": 128, 00:17:55.854 "task_count": 2048 00:17:55.854 } 00:17:55.854 } 00:17:55.854 ] 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "subsystem": "bdev", 00:17:55.854 "config": [ 00:17:55.854 { 00:17:55.854 "method": "bdev_set_options", 00:17:55.854 "params": { 00:17:55.854 "bdev_auto_examine": true, 00:17:55.854 "bdev_io_cache_size": 256, 00:17:55.854 "bdev_io_pool_size": 65535, 00:17:55.854 "iobuf_large_cache_size": 16, 00:17:55.854 "iobuf_small_cache_size": 128 00:17:55.854 } 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "method": "bdev_raid_set_options", 00:17:55.854 "params": { 00:17:55.854 "process_window_size_kb": 1024 00:17:55.854 } 00:17:55.854 }, 00:17:55.854 { 00:17:55.854 "method": "bdev_iscsi_set_options", 00:17:55.855 "params": { 00:17:55.855 "timeout_sec": 30 00:17:55.855 } 00:17:55.855 }, 00:17:55.855 { 00:17:55.855 "method": "bdev_nvme_set_options", 00:17:55.855 "params": { 00:17:55.855 "action_on_timeout": "none", 00:17:55.855 "allow_accel_sequence": false, 00:17:55.855 "arbitration_burst": 0, 00:17:55.855 "bdev_retry_count": 3, 00:17:55.855 "ctrlr_loss_timeout_sec": 0, 00:17:55.855 "delay_cmd_submit": true, 00:17:55.855 "fast_io_fail_timeout_sec": 0, 00:17:55.855 "generate_uuids": false, 00:17:55.855 "high_priority_weight": 0, 00:17:55.855 "io_path_stat": false, 00:17:55.855 "io_queue_requests": 512, 00:17:55.855 "keep_alive_timeout_ms": 10000, 00:17:55.855 "low_priority_weight": 0, 00:17:55.855 "medium_priority_weight": 0, 00:17:55.855 "nvme_adminq_poll_period_us": 
10000, 00:17:55.855 "nvme_ioq_poll_period_us": 0, 00:17:55.855 "reconnect_delay_sec": 0, 00:17:55.855 "timeout_admin_us": 0, 00:17:55.855 "timeout_us": 0, 00:17:55.855 "transport_ack_timeout": 0, 00:17:55.855 "transport_retry_count": 4, 00:17:55.855 "transport_tos": 0 00:17:55.855 } 00:17:55.855 }, 00:17:55.855 { 00:17:55.855 "method": "bdev_nvme_attach_controller", 00:17:55.855 "params": { 00:17:55.855 "adrfam": "IPv4", 00:17:55.855 "ctrlr_loss_timeout_sec": 0, 00:17:55.855 "ddgst": false, 00:17:55.855 "fast_io_fail_timeout_sec": 0, 00:17:55.855 "hdgst": false, 00:17:55.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.855 "name": "TLSTEST", 00:17:55.855 "prchk_guard": false, 00:17:55.855 "prchk_reftag": false, 00:17:55.855 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:55.855 "reconnect_delay_sec": 0, 00:17:55.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.855 "traddr": "10.0.0.2", 00:17:55.855 "trsvcid": "4420", 00:17:55.855 "trtype": "TCP" 00:17:55.855 } 00:17:55.855 }, 00:17:55.855 { 00:17:55.855 "method": "bdev_nvme_set_hotplug", 00:17:55.855 "params": { 00:17:55.855 "enable": false, 00:17:55.855 "period_us": 100000 00:17:55.855 } 00:17:55.855 }, 00:17:55.855 { 00:17:55.855 "method": "bdev_wait_for_examine" 00:17:55.855 } 00:17:55.855 ] 00:17:55.855 }, 00:17:55.855 { 00:17:55.855 "subsystem": "nbd", 00:17:55.855 "config": [] 00:17:55.855 } 00:17:55.855 ] 00:17:55.855 }' 00:17:55.855 06:35:48 -- target/tls.sh@208 -- # killprocess 89151 00:17:55.855 06:35:48 -- common/autotest_common.sh@926 -- # '[' -z 89151 ']' 00:17:55.855 06:35:48 -- common/autotest_common.sh@930 -- # kill -0 89151 00:17:55.855 06:35:48 -- common/autotest_common.sh@931 -- # uname 00:17:55.855 06:35:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:55.855 06:35:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89151 00:17:55.855 06:35:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:55.855 06:35:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:55.855 killing process with pid 89151 00:17:55.855 06:35:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89151' 00:17:55.855 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.855 00:17:55.855 Latency(us) 00:17:55.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.855 =================================================================================================================== 00:17:55.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.855 06:35:48 -- common/autotest_common.sh@945 -- # kill 89151 00:17:55.855 06:35:48 -- common/autotest_common.sh@950 -- # wait 89151 00:17:56.115 06:35:48 -- target/tls.sh@209 -- # killprocess 89054 00:17:56.115 06:35:48 -- common/autotest_common.sh@926 -- # '[' -z 89054 ']' 00:17:56.115 06:35:48 -- common/autotest_common.sh@930 -- # kill -0 89054 00:17:56.115 06:35:48 -- common/autotest_common.sh@931 -- # uname 00:17:56.115 06:35:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:56.115 06:35:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89054 00:17:56.115 06:35:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:56.115 06:35:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:56.115 killing process with pid 89054 00:17:56.115 06:35:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89054' 00:17:56.115 06:35:48 -- 
common/autotest_common.sh@945 -- # kill 89054 00:17:56.115 06:35:48 -- common/autotest_common.sh@950 -- # wait 89054 00:17:56.374 06:35:48 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:56.374 06:35:48 -- target/tls.sh@212 -- # echo '{ 00:17:56.374 "subsystems": [ 00:17:56.374 { 00:17:56.374 "subsystem": "iobuf", 00:17:56.374 "config": [ 00:17:56.374 { 00:17:56.374 "method": "iobuf_set_options", 00:17:56.374 "params": { 00:17:56.374 "large_bufsize": 135168, 00:17:56.374 "large_pool_count": 1024, 00:17:56.374 "small_bufsize": 8192, 00:17:56.374 "small_pool_count": 8192 00:17:56.374 } 00:17:56.374 } 00:17:56.374 ] 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "subsystem": "sock", 00:17:56.374 "config": [ 00:17:56.374 { 00:17:56.374 "method": "sock_impl_set_options", 00:17:56.374 "params": { 00:17:56.374 "enable_ktls": false, 00:17:56.374 "enable_placement_id": 0, 00:17:56.374 "enable_quickack": false, 00:17:56.374 "enable_recv_pipe": true, 00:17:56.374 "enable_zerocopy_send_client": false, 00:17:56.374 "enable_zerocopy_send_server": true, 00:17:56.374 "impl_name": "posix", 00:17:56.374 "recv_buf_size": 2097152, 00:17:56.374 "send_buf_size": 2097152, 00:17:56.374 "tls_version": 0, 00:17:56.374 "zerocopy_threshold": 0 00:17:56.374 } 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "method": "sock_impl_set_options", 00:17:56.374 "params": { 00:17:56.374 "enable_ktls": false, 00:17:56.374 "enable_placement_id": 0, 00:17:56.374 "enable_quickack": false, 00:17:56.374 "enable_recv_pipe": true, 00:17:56.374 "enable_zerocopy_send_client": false, 00:17:56.374 "enable_zerocopy_send_server": true, 00:17:56.374 "impl_name": "ssl", 00:17:56.374 "recv_buf_size": 4096, 00:17:56.374 "send_buf_size": 4096, 00:17:56.374 "tls_version": 0, 00:17:56.374 "zerocopy_threshold": 0 00:17:56.374 } 00:17:56.374 } 00:17:56.374 ] 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "subsystem": "vmd", 00:17:56.374 "config": [] 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "subsystem": "accel", 00:17:56.374 "config": [ 00:17:56.374 { 00:17:56.374 "method": "accel_set_options", 00:17:56.374 "params": { 00:17:56.374 "buf_count": 2048, 00:17:56.374 "large_cache_size": 16, 00:17:56.374 "sequence_count": 2048, 00:17:56.374 "small_cache_size": 128, 00:17:56.374 "task_count": 2048 00:17:56.374 } 00:17:56.374 } 00:17:56.374 ] 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "subsystem": "bdev", 00:17:56.374 "config": [ 00:17:56.374 { 00:17:56.374 "method": "bdev_set_options", 00:17:56.374 "params": { 00:17:56.374 "bdev_auto_examine": true, 00:17:56.374 "bdev_io_cache_size": 256, 00:17:56.374 "bdev_io_pool_size": 65535, 00:17:56.374 "iobuf_large_cache_size": 16, 00:17:56.374 "iobuf_small_cache_size": 128 00:17:56.374 } 00:17:56.374 }, 00:17:56.374 { 00:17:56.374 "method": "bdev_raid_set_options", 00:17:56.374 "params": { 00:17:56.374 "process_window_size_kb": 1024 00:17:56.374 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "bdev_iscsi_set_options", 00:17:56.375 "params": { 00:17:56.375 "timeout_sec": 30 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "bdev_nvme_set_options", 00:17:56.375 "params": { 00:17:56.375 "action_on_timeout": "none", 00:17:56.375 "allow_accel_sequence": false, 00:17:56.375 "arbitration_burst": 0, 00:17:56.375 "bdev_retry_count": 3, 00:17:56.375 "ctrlr_loss_timeout_sec": 0, 00:17:56.375 "delay_cmd_submit": true, 00:17:56.375 "fast_io_fail_timeout_sec": 0, 00:17:56.375 "generate_uuids": false, 00:17:56.375 "high_priority_weight": 0, 00:17:56.375 "io_path_stat": false, 00:17:56.375 
"io_queue_requests": 0, 00:17:56.375 "keep_alive_timeout_ms": 10000, 00:17:56.375 "low_priority_weight": 0, 00:17:56.375 "medium_priority_weight": 0, 00:17:56.375 "nvme_adminq_poll_period_us": 10000, 00:17:56.375 "nvme_ioq_poll_period_us": 0, 00:17:56.375 "reconnect_delay_sec": 0, 00:17:56.375 "timeout_admin_us": 0, 00:17:56.375 "timeout_us": 0, 00:17:56.375 "transport_ack_timeout": 0, 00:17:56.375 "transport_retry_count": 4, 00:17:56.375 "transport_tos": 0 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "bdev_nvme_set_hotplug", 00:17:56.375 "params": { 00:17:56.375 "enable": false, 00:17:56.375 "period_us": 100000 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "bdev_malloc_create", 00:17:56.375 "params": { 00:17:56.375 "block_size": 4096, 00:17:56.375 "name": "malloc0", 00:17:56.375 "num_blocks": 8192, 00:17:56.375 "optimal_io_boundary": 0, 00:17:56.375 "physical_block_size": 4096, 00:17:56.375 "uuid": "94ff480c-de8d-496d-8134-98d427f092e0" 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "bdev_wait_for_examine" 00:17:56.375 } 00:17:56.375 ] 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "subsystem": "nbd", 00:17:56.375 "config": [] 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "subsystem": "scheduler", 00:17:56.375 "config": [ 00:17:56.375 { 00:17:56.375 "method": "framework_set_scheduler", 00:17:56.375 "params": { 00:17:56.375 "name": "static" 00:17:56.375 } 00:17:56.375 } 00:17:56.375 ] 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "subsystem": "nvmf", 00:17:56.375 "config": [ 00:17:56.375 { 00:17:56.375 "method": "nvmf_set_config", 00:17:56.375 "params": { 00:17:56.375 "admin_cmd_passthru": { 00:17:56.375 "identify_ctrlr": false 00:17:56.375 }, 00:17:56.375 "discovery_filter": "match_any" 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_set_max_subsystems", 00:17:56.375 "params": { 00:17:56.375 "max_subsystems": 1024 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_set_crdt", 00:17:56.375 "params": { 00:17:56.375 "crdt1": 0, 00:17:56.375 "crdt2": 0, 00:17:56.375 "crdt3": 0 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_create_transport", 00:17:56.375 "params": { 00:17:56.375 "abort_timeout_sec": 1, 00:17:56.375 "buf_cache_size": 4294967295, 00:17:56.375 "c2h_success": false, 00:17:56.375 "dif_insert_or_strip": false, 00:17:56.375 "in_capsule_data_size": 4096, 00:17:56.375 "io_unit_size": 131072, 00:17:56.375 "max_aq_depth": 128, 00:17:56.375 "max_io_qpairs_per_ctrlr": 127, 00:17:56.375 "max_io_size": 131072, 00:17:56.375 "max_queue_depth": 128, 00:17:56.375 "num_shared_buffers": 511, 00:17:56.375 "sock_priority": 0, 00:17:56.375 "trtype": "TCP", 00:17:56.375 "zcopy": false 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_create_subsystem", 00:17:56.375 "params": { 00:17:56.375 "allow_any_host": false, 00:17:56.375 "ana_reporting": false, 00:17:56.375 "max_cntlid": 65519, 00:17:56.375 "max_namespaces": 10, 00:17:56.375 "min_cntlid": 1, 00:17:56.375 "model_number": "SPDK bdev Controller", 00:17:56.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.375 "serial_number": "SPDK00000000000001" 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_subsystem_add_host", 00:17:56.375 "params": { 00:17:56.375 "host": "nqn.2016-06.io.spdk:host1", 00:17:56.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.375 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 
{ 00:17:56.375 "method": "nvmf_subsystem_add_ns", 00:17:56.375 "params": { 00:17:56.375 "namespace": { 00:17:56.375 "bdev_name": "malloc0", 00:17:56.375 "nguid": "94FF480CDE8D496D813498D427F092E0", 00:17:56.375 "nsid": 1, 00:17:56.375 "uuid": "94ff480c-de8d-496d-8134-98d427f092e0" 00:17:56.375 }, 00:17:56.375 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:56.375 } 00:17:56.375 }, 00:17:56.375 { 00:17:56.375 "method": "nvmf_subsystem_add_listener", 00:17:56.375 "params": { 00:17:56.375 "listen_address": { 00:17:56.375 "adrfam": "IPv4", 00:17:56.375 "traddr": "10.0.0.2", 00:17:56.375 "trsvcid": "4420", 00:17:56.375 "trtype": "TCP" 00:17:56.375 }, 00:17:56.375 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.375 "secure_channel": true 00:17:56.375 } 00:17:56.375 } 00:17:56.375 ] 00:17:56.375 } 00:17:56.375 ] 00:17:56.375 }' 00:17:56.375 06:35:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:56.375 06:35:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:56.375 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:56.375 06:35:48 -- nvmf/common.sh@469 -- # nvmfpid=89230 00:17:56.375 06:35:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:56.375 06:35:48 -- nvmf/common.sh@470 -- # waitforlisten 89230 00:17:56.375 06:35:48 -- common/autotest_common.sh@819 -- # '[' -z 89230 ']' 00:17:56.375 06:35:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.375 06:35:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:56.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.375 06:35:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.375 06:35:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:56.375 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:56.375 [2024-10-04 06:35:49.036233] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:17:56.375 [2024-10-04 06:35:49.036312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.634 [2024-10-04 06:35:49.171391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.634 [2024-10-04 06:35:49.240820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:56.634 [2024-10-04 06:35:49.240979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.634 [2024-10-04 06:35:49.240991] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.634 [2024-10-04 06:35:49.240999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.634 [2024-10-04 06:35:49.241060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.893 [2024-10-04 06:35:49.459005] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.893 [2024-10-04 06:35:49.490958] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.893 [2024-10-04 06:35:49.491195] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.462 06:35:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:57.462 06:35:49 -- common/autotest_common.sh@852 -- # return 0 00:17:57.462 06:35:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.462 06:35:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:57.462 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:17:57.462 06:35:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.462 06:35:49 -- target/tls.sh@216 -- # bdevperf_pid=89274 00:17:57.462 06:35:49 -- target/tls.sh@217 -- # waitforlisten 89274 /var/tmp/bdevperf.sock 00:17:57.462 06:35:49 -- common/autotest_common.sh@819 -- # '[' -z 89274 ']' 00:17:57.462 06:35:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.462 06:35:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.462 06:35:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.462 06:35:49 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:57.462 06:35:49 -- target/tls.sh@213 -- # echo '{ 00:17:57.462 "subsystems": [ 00:17:57.462 { 00:17:57.462 "subsystem": "iobuf", 00:17:57.462 "config": [ 00:17:57.462 { 00:17:57.462 "method": "iobuf_set_options", 00:17:57.462 "params": { 00:17:57.462 "large_bufsize": 135168, 00:17:57.462 "large_pool_count": 1024, 00:17:57.462 "small_bufsize": 8192, 00:17:57.462 "small_pool_count": 8192 00:17:57.462 } 00:17:57.462 } 00:17:57.462 ] 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "subsystem": "sock", 00:17:57.462 "config": [ 00:17:57.462 { 00:17:57.462 "method": "sock_impl_set_options", 00:17:57.462 "params": { 00:17:57.462 "enable_ktls": false, 00:17:57.462 "enable_placement_id": 0, 00:17:57.462 "enable_quickack": false, 00:17:57.462 "enable_recv_pipe": true, 00:17:57.462 "enable_zerocopy_send_client": false, 00:17:57.462 "enable_zerocopy_send_server": true, 00:17:57.462 "impl_name": "posix", 00:17:57.462 "recv_buf_size": 2097152, 00:17:57.462 "send_buf_size": 2097152, 00:17:57.462 "tls_version": 0, 00:17:57.462 "zerocopy_threshold": 0 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "sock_impl_set_options", 00:17:57.462 "params": { 00:17:57.462 "enable_ktls": false, 00:17:57.462 "enable_placement_id": 0, 00:17:57.462 "enable_quickack": false, 00:17:57.462 "enable_recv_pipe": true, 00:17:57.462 "enable_zerocopy_send_client": false, 00:17:57.462 "enable_zerocopy_send_server": true, 00:17:57.462 "impl_name": "ssl", 00:17:57.462 "recv_buf_size": 4096, 00:17:57.462 "send_buf_size": 4096, 00:17:57.462 "tls_version": 0, 00:17:57.462 "zerocopy_threshold": 0 00:17:57.462 } 00:17:57.462 } 00:17:57.462 ] 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "subsystem": "vmd", 00:17:57.462 "config": [] 
00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "subsystem": "accel", 00:17:57.462 "config": [ 00:17:57.462 { 00:17:57.462 "method": "accel_set_options", 00:17:57.462 "params": { 00:17:57.462 "buf_count": 2048, 00:17:57.462 "large_cache_size": 16, 00:17:57.462 "sequence_count": 2048, 00:17:57.462 "small_cache_size": 128, 00:17:57.462 "task_count": 2048 00:17:57.462 } 00:17:57.462 } 00:17:57.462 ] 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "subsystem": "bdev", 00:17:57.462 "config": [ 00:17:57.462 { 00:17:57.462 "method": "bdev_set_options", 00:17:57.462 "params": { 00:17:57.462 "bdev_auto_examine": true, 00:17:57.462 "bdev_io_cache_size": 256, 00:17:57.462 "bdev_io_pool_size": 65535, 00:17:57.462 "iobuf_large_cache_size": 16, 00:17:57.462 "iobuf_small_cache_size": 128 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "bdev_raid_set_options", 00:17:57.462 "params": { 00:17:57.462 "process_window_size_kb": 1024 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "bdev_iscsi_set_options", 00:17:57.462 "params": { 00:17:57.462 "timeout_sec": 30 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "bdev_nvme_set_options", 00:17:57.462 "params": { 00:17:57.462 "action_on_timeout": "none", 00:17:57.462 "allow_accel_sequence": false, 00:17:57.462 "arbitration_burst": 0, 00:17:57.462 "bdev_retry_count": 3, 00:17:57.462 "ctrlr_loss_timeout_sec": 0, 00:17:57.462 "delay_cmd_submit": true, 00:17:57.462 "fast_io_fail_timeout_sec": 0, 00:17:57.462 "generate_uuids": false, 00:17:57.462 "high_priority_weight": 0, 00:17:57.462 "io_path_stat": false, 00:17:57.462 "io_queue_requests": 512, 00:17:57.462 "keep_alive_timeout_ms": 10000, 00:17:57.462 "low_priority_weight": 0, 00:17:57.462 "medium_priority_weight": 0, 00:17:57.462 "nvme_adminq_poll_period_us": 10000, 00:17:57.462 "nvme_ioq_poll_period_us": 0, 00:17:57.462 "reconnect_delay_sec": 0, 00:17:57.462 "timeout_admin_us": 0, 00:17:57.462 "timeout_us": 0, 00:17:57.462 "transport_ack_timeout": 0, 00:17:57.462 "transport_retry_count": 4, 00:17:57.462 "transport_tos": 0 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "bdev_nvme_attach_controller", 00:17:57.462 "params": { 00:17:57.462 "adrfam": "IPv4", 00:17:57.462 "ctrlr_loss_timeout_sec": 0, 00:17:57.462 "ddgst": false, 00:17:57.462 "fast_io_fail_timeout_sec": 0, 00:17:57.462 "hdgst": false, 00:17:57.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.462 "name": "TLSTEST", 00:17:57.462 "prchk_guard": false, 00:17:57.462 "prchk_reftag": false, 00:17:57.462 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:57.462 "reconnect_delay_sec": 0, 00:17:57.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.462 "traddr": "10.0.0.2", 00:17:57.462 "trsvcid": "4420", 00:17:57.462 "trtype": "TCP" 00:17:57.462 } 00:17:57.462 }, 00:17:57.462 { 00:17:57.462 "method": "bdev_nvme_set_hotplug", 00:17:57.462 "params": { 00:17:57.462 "enable": false, 00:17:57.462 "period_us": 100000 00:17:57.462 } 00:17:57.463 }, 00:17:57.463 { 00:17:57.463 "method": "bdev_wait_for_examine" 00:17:57.463 } 00:17:57.463 ] 00:17:57.463 }, 00:17:57.463 { 00:17:57.463 "subsystem": "nbd", 00:17:57.463 "config": [] 00:17:57.463 } 00:17:57.463 ] 00:17:57.463 }' 00:17:57.463 06:35:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.463 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:17:57.463 [2024-10-04 06:35:50.005673] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:17:57.463 [2024-10-04 06:35:50.005791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89274 ] 00:17:57.721 [2024-10-04 06:35:50.147770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.721 [2024-10-04 06:35:50.226470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.980 [2024-10-04 06:35:50.409037] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.547 06:35:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.547 06:35:50 -- common/autotest_common.sh@852 -- # return 0 00:17:58.547 06:35:50 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.547 Running I/O for 10 seconds... 00:18:08.518 00:18:08.518 Latency(us) 00:18:08.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.518 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.518 Verification LBA range: start 0x0 length 0x2000 00:18:08.518 TLSTESTn1 : 10.01 5625.84 21.98 0.00 0.00 22718.81 4527.94 28955.00 00:18:08.518 =================================================================================================================== 00:18:08.518 Total : 5625.84 21.98 0.00 0.00 22718.81 4527.94 28955.00 00:18:08.518 0 00:18:08.518 06:36:01 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.518 06:36:01 -- target/tls.sh@223 -- # killprocess 89274 00:18:08.518 06:36:01 -- common/autotest_common.sh@926 -- # '[' -z 89274 ']' 00:18:08.518 06:36:01 -- common/autotest_common.sh@930 -- # kill -0 89274 00:18:08.518 06:36:01 -- common/autotest_common.sh@931 -- # uname 00:18:08.518 06:36:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.518 06:36:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89274 00:18:08.518 06:36:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:08.518 06:36:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:08.518 killing process with pid 89274 00:18:08.518 06:36:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89274' 00:18:08.518 06:36:01 -- common/autotest_common.sh@945 -- # kill 89274 00:18:08.518 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.518 00:18:08.518 Latency(us) 00:18:08.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.518 =================================================================================================================== 00:18:08.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.518 06:36:01 -- common/autotest_common.sh@950 -- # wait 89274 00:18:08.778 06:36:01 -- target/tls.sh@224 -- # killprocess 89230 00:18:08.778 06:36:01 -- common/autotest_common.sh@926 -- # '[' -z 89230 ']' 00:18:08.778 06:36:01 -- common/autotest_common.sh@930 -- # kill -0 89230 00:18:08.778 06:36:01 -- common/autotest_common.sh@931 -- # uname 00:18:08.778 06:36:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.778 06:36:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89230 00:18:08.778 06:36:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:08.778 06:36:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 
= sudo ']' 00:18:08.778 killing process with pid 89230 00:18:08.778 06:36:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89230' 00:18:08.778 06:36:01 -- common/autotest_common.sh@945 -- # kill 89230 00:18:08.778 06:36:01 -- common/autotest_common.sh@950 -- # wait 89230 00:18:09.089 06:36:01 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:09.089 06:36:01 -- target/tls.sh@227 -- # cleanup 00:18:09.089 06:36:01 -- target/tls.sh@15 -- # process_shm --id 0 00:18:09.089 06:36:01 -- common/autotest_common.sh@796 -- # type=--id 00:18:09.089 06:36:01 -- common/autotest_common.sh@797 -- # id=0 00:18:09.089 06:36:01 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:09.089 06:36:01 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:09.089 06:36:01 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:09.089 06:36:01 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:09.089 06:36:01 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:09.089 06:36:01 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:09.089 nvmf_trace.0 00:18:09.089 06:36:01 -- common/autotest_common.sh@811 -- # return 0 00:18:09.089 06:36:01 -- target/tls.sh@16 -- # killprocess 89274 00:18:09.089 06:36:01 -- common/autotest_common.sh@926 -- # '[' -z 89274 ']' 00:18:09.089 06:36:01 -- common/autotest_common.sh@930 -- # kill -0 89274 00:18:09.089 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89274) - No such process 00:18:09.089 Process with pid 89274 is not found 00:18:09.089 06:36:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89274 is not found' 00:18:09.089 06:36:01 -- target/tls.sh@17 -- # nvmftestfini 00:18:09.089 06:36:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:09.089 06:36:01 -- nvmf/common.sh@116 -- # sync 00:18:09.348 06:36:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:09.348 06:36:01 -- nvmf/common.sh@119 -- # set +e 00:18:09.348 06:36:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:09.349 06:36:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:09.349 rmmod nvme_tcp 00:18:09.349 rmmod nvme_fabrics 00:18:09.349 rmmod nvme_keyring 00:18:09.349 06:36:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:09.349 06:36:01 -- nvmf/common.sh@123 -- # set -e 00:18:09.349 06:36:01 -- nvmf/common.sh@124 -- # return 0 00:18:09.349 06:36:01 -- nvmf/common.sh@477 -- # '[' -n 89230 ']' 00:18:09.349 06:36:01 -- nvmf/common.sh@478 -- # killprocess 89230 00:18:09.349 06:36:01 -- common/autotest_common.sh@926 -- # '[' -z 89230 ']' 00:18:09.349 06:36:01 -- common/autotest_common.sh@930 -- # kill -0 89230 00:18:09.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89230) - No such process 00:18:09.349 Process with pid 89230 is not found 00:18:09.349 06:36:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89230 is not found' 00:18:09.349 06:36:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:09.349 06:36:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:09.349 06:36:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:09.349 06:36:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.349 06:36:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:09.349 06:36:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.349 06:36:01 -- 
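
Before tearing the target down, the cleanup path archives the SPDK trace buffers left in /dev/shm (nvmf_trace.0 above) for offline analysis. Roughly, the process_shm helper traced here reduces to the following sketch (the real helper parses --id/--pid flags rather than positional arguments):

    process_shm() {
        local id=$1 output_dir=$2
        local shm_files n
        shm_files=$(find /dev/shm -name "*.${id}" -printf '%f\n')
        [ -n "$shm_files" ] || return 1
        for n in $shm_files; do
            tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
        done
    }
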
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.349 06:36:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.349 06:36:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:09.349 06:36:01 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:09.349 00:18:09.349 real 1m11.785s 00:18:09.349 user 1m47.250s 00:18:09.349 sys 0m27.581s 00:18:09.349 06:36:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.349 06:36:01 -- common/autotest_common.sh@10 -- # set +x 00:18:09.349 ************************************ 00:18:09.349 END TEST nvmf_tls 00:18:09.349 ************************************ 00:18:09.349 06:36:01 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:09.349 06:36:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:09.349 06:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.349 06:36:01 -- common/autotest_common.sh@10 -- # set +x 00:18:09.349 ************************************ 00:18:09.349 START TEST nvmf_fips 00:18:09.349 ************************************ 00:18:09.349 06:36:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:09.349 * Looking for test storage... 00:18:09.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:09.349 06:36:02 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.349 06:36:02 -- nvmf/common.sh@7 -- # uname -s 00:18:09.349 06:36:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.349 06:36:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.349 06:36:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.349 06:36:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.349 06:36:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.349 06:36:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.349 06:36:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.349 06:36:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.349 06:36:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.349 06:36:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.349 06:36:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:09.349 06:36:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:09.349 06:36:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.609 06:36:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.609 06:36:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.609 06:36:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.609 06:36:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.609 06:36:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.609 06:36:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.609 06:36:02 -- paths/export.sh@2 -- # 
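
Each suite here is launched through the run_test wrapper, which prints the START/END banners visible above and accounts for the real/user/sys timing printed at the end of nvmf_tls. Its approximate shape, reconstructed from the banner output rather than the source:

    run_test() {
        local test_name=$1; shift
        (( $# >= 1 )) || return 1      # the trace checks the argument count first
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # e.g. .../fips.sh --transport=tcp
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
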
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.609 06:36:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.609 06:36:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.609 06:36:02 -- paths/export.sh@5 -- # export PATH 00:18:09.609 06:36:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.609 06:36:02 -- nvmf/common.sh@46 -- # : 0 00:18:09.609 06:36:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.609 06:36:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.609 06:36:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.609 06:36:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.609 06:36:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.609 06:36:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.609 06:36:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.609 06:36:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.609 06:36:02 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.609 06:36:02 -- fips/fips.sh@89 -- # check_openssl_version 00:18:09.609 06:36:02 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:09.609 06:36:02 -- fips/fips.sh@85 -- # openssl version 00:18:09.609 06:36:02 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:09.609 06:36:02 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:09.609 06:36:02 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:09.609 06:36:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:09.609 06:36:02 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:18:09.609 06:36:02 -- scripts/common.sh@335 -- # IFS=.-: 00:18:09.609 06:36:02 -- scripts/common.sh@335 -- # read -ra ver1 00:18:09.609 06:36:02 -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.609 06:36:02 -- scripts/common.sh@336 -- # read -ra ver2 00:18:09.609 06:36:02 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:09.610 06:36:02 -- scripts/common.sh@339 -- # ver1_l=3 00:18:09.610 06:36:02 -- scripts/common.sh@340 -- # ver2_l=3 00:18:09.610 06:36:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:09.610 06:36:02 -- scripts/common.sh@343 -- # case "$op" in 00:18:09.610 06:36:02 -- scripts/common.sh@347 -- # : 1 00:18:09.610 06:36:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:09.610 06:36:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.610 06:36:02 -- scripts/common.sh@364 -- # decimal 3 00:18:09.610 06:36:02 -- scripts/common.sh@352 -- # local d=3 00:18:09.610 06:36:02 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:09.610 06:36:02 -- scripts/common.sh@354 -- # echo 3 00:18:09.610 06:36:02 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:09.610 06:36:02 -- scripts/common.sh@365 -- # decimal 3 00:18:09.610 06:36:02 -- scripts/common.sh@352 -- # local d=3 00:18:09.610 06:36:02 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:09.610 06:36:02 -- scripts/common.sh@354 -- # echo 3 00:18:09.610 06:36:02 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:09.610 06:36:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:09.610 06:36:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:09.610 06:36:02 -- scripts/common.sh@363 -- # (( v++ )) 00:18:09.610 06:36:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.610 06:36:02 -- scripts/common.sh@364 -- # decimal 1 00:18:09.610 06:36:02 -- scripts/common.sh@352 -- # local d=1 00:18:09.610 06:36:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.610 06:36:02 -- scripts/common.sh@354 -- # echo 1 00:18:09.610 06:36:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:09.610 06:36:02 -- scripts/common.sh@365 -- # decimal 0 00:18:09.610 06:36:02 -- scripts/common.sh@352 -- # local d=0 00:18:09.610 06:36:02 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:09.610 06:36:02 -- scripts/common.sh@354 -- # echo 0 00:18:09.610 06:36:02 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:09.610 06:36:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:09.610 06:36:02 -- scripts/common.sh@366 -- # return 0 00:18:09.610 06:36:02 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:09.610 06:36:02 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:09.610 06:36:02 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:09.610 06:36:02 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:09.610 06:36:02 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:09.610 06:36:02 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:09.610 06:36:02 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:09.610 06:36:02 -- fips/fips.sh@113 -- # build_openssl_config 00:18:09.610 06:36:02 -- fips/fips.sh@37 -- # cat 00:18:09.610 06:36:02 -- fips/fips.sh@57 -- # [[ ! 
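
The OpenSSL version gate above (ge 3.1.1 3.0.0) splits both version strings on '.', '-' and ':' and compares them numerically, component by component. A condensed restatement, assuming purely numeric components (the real script normalizes each field through its decimal() helper before comparing):

    ge() { cmp_versions "$1" '>=' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local IFS=.-: op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == *'>'* ]]; return      # left side strictly newer
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == *'<'* ]]; return      # left side strictly older
            fi
        done
        [[ $op == *'='* ]]                      # versions compare equal
    }
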
-t 0 ]] 00:18:09.610 06:36:02 -- fips/fips.sh@58 -- # cat - 00:18:09.610 06:36:02 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:09.610 06:36:02 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:09.610 06:36:02 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:09.610 06:36:02 -- fips/fips.sh@116 -- # openssl list -providers 00:18:09.610 06:36:02 -- fips/fips.sh@116 -- # grep name 00:18:09.610 06:36:02 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:09.610 06:36:02 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:09.610 06:36:02 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:09.610 06:36:02 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:09.610 06:36:02 -- common/autotest_common.sh@640 -- # local es=0 00:18:09.610 06:36:02 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:09.610 06:36:02 -- common/autotest_common.sh@628 -- # local arg=openssl 00:18:09.610 06:36:02 -- fips/fips.sh@127 -- # : 00:18:09.610 06:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:09.610 06:36:02 -- common/autotest_common.sh@632 -- # type -t openssl 00:18:09.610 06:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:09.610 06:36:02 -- common/autotest_common.sh@634 -- # type -P openssl 00:18:09.610 06:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:09.610 06:36:02 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:18:09.610 06:36:02 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:18:09.610 06:36:02 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:18:09.610 Error setting digest 00:18:09.610 40B2EDB7B27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:09.610 40B2EDB7B27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:09.610 06:36:02 -- common/autotest_common.sh@643 -- # es=1 00:18:09.610 06:36:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:09.610 06:36:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:09.610 06:36:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:09.610 06:36:02 -- fips/fips.sh@130 -- # nvmftestinit 00:18:09.610 06:36:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.610 06:36:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.610 06:36:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.610 06:36:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.610 06:36:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.610 06:36:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.610 06:36:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.610 06:36:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.610 06:36:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:09.610 06:36:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:09.610 06:36:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:09.610 06:36:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:09.610 06:36:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:09.610 06:36:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:09.610 06:36:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.610 06:36:02 -- 
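
Everything the FIPS gate above establishes comes down to two checks: exactly the base and fips providers are loaded under the generated spdk_fips.conf, and a non-approved digest such as MD5 is rejected by the default library context, which is why the "Error setting digest" above is the expected outcome. A minimal sketch of that gate (the real script lowercases the provider lines before matching):

    export OPENSSL_CONF=spdk_fips.conf      # config produced by build_openssl_config
    mapfile -t providers < <(openssl list -providers | grep name)
    (( ${#providers[@]} == 2 ))     || exit 1
    [[ ${providers[0]} == *base* ]] || exit 1   # e.g. "name: openssl base provider"
    [[ ${providers[1]} == *fips* ]] || exit 1
    if openssl md5 <<< test 2> /dev/null; then
        echo "MD5 unexpectedly available: FIPS mode is not enforced" >&2
        exit 1
    fi
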
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.610 06:36:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.610 06:36:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:09.610 06:36:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.610 06:36:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.610 06:36:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.610 06:36:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.610 06:36:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.610 06:36:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.610 06:36:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.610 06:36:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.610 06:36:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:09.610 06:36:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:09.610 Cannot find device "nvmf_tgt_br" 00:18:09.610 06:36:02 -- nvmf/common.sh@154 -- # true 00:18:09.610 06:36:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.610 Cannot find device "nvmf_tgt_br2" 00:18:09.610 06:36:02 -- nvmf/common.sh@155 -- # true 00:18:09.610 06:36:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:09.610 06:36:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:09.610 Cannot find device "nvmf_tgt_br" 00:18:09.610 06:36:02 -- nvmf/common.sh@157 -- # true 00:18:09.610 06:36:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:09.610 Cannot find device "nvmf_tgt_br2" 00:18:09.610 06:36:02 -- nvmf/common.sh@158 -- # true 00:18:09.610 06:36:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:09.869 06:36:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:09.869 06:36:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.869 06:36:02 -- nvmf/common.sh@161 -- # true 00:18:09.869 06:36:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.869 06:36:02 -- nvmf/common.sh@162 -- # true 00:18:09.869 06:36:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.869 06:36:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.869 06:36:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.869 06:36:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.869 06:36:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.869 06:36:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.869 06:36:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.869 06:36:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.869 06:36:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.869 06:36:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:09.869 06:36:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:09.869 06:36:02 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:09.869 06:36:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:09.869 06:36:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.869 06:36:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.869 06:36:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.869 06:36:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:09.869 06:36:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:09.869 06:36:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.129 06:36:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.129 06:36:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.129 06:36:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.129 06:36:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.129 06:36:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:10.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:18:10.129 00:18:10.129 --- 10.0.0.2 ping statistics --- 00:18:10.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.129 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:10.129 06:36:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:10.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:18:10.129 00:18:10.129 --- 10.0.0.3 ping statistics --- 00:18:10.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.129 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:10.129 06:36:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
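
The "Cannot find device" errors above are expected: nvmf_veth_init first tears down any leftover topology, then rebuilds it from scratch. The resulting layout puts the target in its own network namespace, bridged to the initiator side; condensed here to one target interface (the real setup adds a second pair, nvmf_tgt_if2/nvmf_tgt_br2, carrying 10.0.0.3):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the log then confirm both target addresses are reachable from the host and, from inside the namespace, that the initiator answers.
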
00:18:10.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:10.129 00:18:10.129 --- 10.0.0.1 ping statistics --- 00:18:10.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.129 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:10.129 06:36:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.129 06:36:02 -- nvmf/common.sh@421 -- # return 0 00:18:10.129 06:36:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.129 06:36:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.129 06:36:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.129 06:36:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.129 06:36:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.129 06:36:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.129 06:36:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.129 06:36:02 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:10.129 06:36:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.129 06:36:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.129 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:18:10.129 06:36:02 -- nvmf/common.sh@469 -- # nvmfpid=89627 00:18:10.129 06:36:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.129 06:36:02 -- nvmf/common.sh@470 -- # waitforlisten 89627 00:18:10.129 06:36:02 -- common/autotest_common.sh@819 -- # '[' -z 89627 ']' 00:18:10.129 06:36:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.129 06:36:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.129 06:36:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.129 06:36:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.129 06:36:02 -- common/autotest_common.sh@10 -- # set +x 00:18:10.129 [2024-10-04 06:36:02.723311] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:18:10.129 [2024-10-04 06:36:02.723406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.387 [2024-10-04 06:36:02.861497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.387 [2024-10-04 06:36:02.925408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.387 [2024-10-04 06:36:02.925567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.387 [2024-10-04 06:36:02.925579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.387 [2024-10-04 06:36:02.925587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
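
nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. Simplified from autotest_common.sh (the real helper also escalates the retry delay and distinguishes more failure modes):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2> /dev/null || return 1      # app died during startup
            if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                 # socket is up and serving RPCs
            fi
            sleep 0.5
        done
        return 1
    }
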
00:18:10.387 [2024-10-04 06:36:02.925610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.322 06:36:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.322 06:36:03 -- common/autotest_common.sh@852 -- # return 0 00:18:11.322 06:36:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.322 06:36:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:11.322 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:18:11.322 06:36:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.322 06:36:03 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:11.322 06:36:03 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:11.322 06:36:03 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:11.322 06:36:03 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:11.322 06:36:03 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:11.322 06:36:03 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:11.322 06:36:03 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:11.322 06:36:03 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.581 [2024-10-04 06:36:04.037368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.581 [2024-10-04 06:36:04.053325] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.581 [2024-10-04 06:36:04.053501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.581 malloc0 00:18:11.581 06:36:04 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.581 06:36:04 -- fips/fips.sh@147 -- # bdevperf_pid=89679 00:18:11.581 06:36:04 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.581 06:36:04 -- fips/fips.sh@148 -- # waitforlisten 89679 /var/tmp/bdevperf.sock 00:18:11.581 06:36:04 -- common/autotest_common.sh@819 -- # '[' -z 89679 ']' 00:18:11.581 06:36:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.581 06:36:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.581 06:36:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.581 06:36:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.581 06:36:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.581 [2024-10-04 06:36:04.190310] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
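
On the target side the test writes the interchange-format PSK to a 0600-mode file and wires up the TLS-enabled TCP listener through rpc.py. The subsystem and namespace plumbing between the rpc.py invocation and the tcp.c notices is elided in this log, so only the visible steps are sketched here:

    key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"          # the key file must not be world-readable

    rpc.py nvmf_create_transport -t tcp
    # ... subsystem/namespace creation happens here (not shown in the log) ...
    # The listener setup emits the "TLS support is considered experimental"
    # notice and leaves the target listening on 10.0.0.2 port 4420.
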
00:18:11.581 [2024-10-04 06:36:04.190407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89679 ] 00:18:11.839 [2024-10-04 06:36:04.322488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.840 [2024-10-04 06:36:04.424736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.777 06:36:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.777 06:36:05 -- common/autotest_common.sh@852 -- # return 0 00:18:12.777 06:36:05 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:12.777 [2024-10-04 06:36:05.412399] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.036 TLSTESTn1 00:18:13.036 06:36:05 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.036 Running I/O for 10 seconds... 00:18:23.004 00:18:23.004 Latency(us) 00:18:23.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.004 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.004 Verification LBA range: start 0x0 length 0x2000 00:18:23.004 TLSTESTn1 : 10.02 6182.95 24.15 0.00 0.00 20670.07 4587.52 23592.96 00:18:23.004 =================================================================================================================== 00:18:23.004 Total : 6182.95 24.15 0.00 0.00 20670.07 4587.52 23592.96 00:18:23.004 0 00:18:23.004 06:36:15 -- fips/fips.sh@1 -- # cleanup 00:18:23.004 06:36:15 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:23.004 06:36:15 -- common/autotest_common.sh@796 -- # type=--id 00:18:23.004 06:36:15 -- common/autotest_common.sh@797 -- # id=0 00:18:23.004 06:36:15 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:23.004 06:36:15 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:23.004 06:36:15 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:23.004 06:36:15 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:23.004 06:36:15 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:23.004 06:36:15 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:23.004 nvmf_trace.0 00:18:23.263 06:36:15 -- common/autotest_common.sh@811 -- # return 0 00:18:23.263 06:36:15 -- fips/fips.sh@16 -- # killprocess 89679 00:18:23.263 06:36:15 -- common/autotest_common.sh@926 -- # '[' -z 89679 ']' 00:18:23.263 06:36:15 -- common/autotest_common.sh@930 -- # kill -0 89679 00:18:23.263 06:36:15 -- common/autotest_common.sh@931 -- # uname 00:18:23.263 06:36:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.263 06:36:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89679 00:18:23.263 killing process with pid 89679 00:18:23.263 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.263 00:18:23.263 Latency(us) 00:18:23.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.263 
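
The initiator side then mirrors what the trace shows: bdevperf is started as a standalone app with its own RPC socket, the TLS controller is attached with the PSK, and bdevperf.py drives the ten-second verify workload that produces the TLSTESTn1 table above (binary paths shortened relative to the log):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"

    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
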
=================================================================================================================== 00:18:23.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.263 06:36:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:23.263 06:36:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:23.263 06:36:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89679' 00:18:23.263 06:36:15 -- common/autotest_common.sh@945 -- # kill 89679 00:18:23.263 06:36:15 -- common/autotest_common.sh@950 -- # wait 89679 00:18:23.522 06:36:16 -- fips/fips.sh@17 -- # nvmftestfini 00:18:23.522 06:36:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:23.522 06:36:16 -- nvmf/common.sh@116 -- # sync 00:18:23.522 06:36:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:23.522 06:36:16 -- nvmf/common.sh@119 -- # set +e 00:18:23.522 06:36:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:23.522 06:36:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:23.522 rmmod nvme_tcp 00:18:23.522 rmmod nvme_fabrics 00:18:23.522 rmmod nvme_keyring 00:18:23.522 06:36:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:23.522 06:36:16 -- nvmf/common.sh@123 -- # set -e 00:18:23.522 06:36:16 -- nvmf/common.sh@124 -- # return 0 00:18:23.522 06:36:16 -- nvmf/common.sh@477 -- # '[' -n 89627 ']' 00:18:23.522 06:36:16 -- nvmf/common.sh@478 -- # killprocess 89627 00:18:23.522 06:36:16 -- common/autotest_common.sh@926 -- # '[' -z 89627 ']' 00:18:23.522 06:36:16 -- common/autotest_common.sh@930 -- # kill -0 89627 00:18:23.522 06:36:16 -- common/autotest_common.sh@931 -- # uname 00:18:23.522 06:36:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.522 06:36:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89627 00:18:23.522 killing process with pid 89627 00:18:23.522 06:36:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:23.522 06:36:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:23.522 06:36:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89627' 00:18:23.522 06:36:16 -- common/autotest_common.sh@945 -- # kill 89627 00:18:23.522 06:36:16 -- common/autotest_common.sh@950 -- # wait 89627 00:18:23.782 06:36:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:23.782 06:36:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:23.782 06:36:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:23.782 06:36:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.782 06:36:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:23.782 06:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.782 06:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.782 06:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.041 06:36:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:24.041 06:36:16 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:24.041 ************************************ 00:18:24.041 END TEST nvmf_fips 00:18:24.041 ************************************ 00:18:24.041 00:18:24.041 real 0m14.537s 00:18:24.041 user 0m19.263s 00:18:24.041 sys 0m6.168s 00:18:24.041 06:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.041 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:18:24.041 06:36:16 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:24.041 06:36:16 -- nvmf/nvmf.sh@64 -- # 
run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:24.041 06:36:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:24.041 06:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:24.041 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:18:24.041 ************************************ 00:18:24.041 START TEST nvmf_fuzz 00:18:24.041 ************************************ 00:18:24.041 06:36:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:24.041 * Looking for test storage... 00:18:24.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:24.041 06:36:16 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.041 06:36:16 -- nvmf/common.sh@7 -- # uname -s 00:18:24.041 06:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.041 06:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.041 06:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.041 06:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.041 06:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.041 06:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.041 06:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.041 06:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.041 06:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.041 06:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.041 06:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:24.041 06:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:24.041 06:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.041 06:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.041 06:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.041 06:36:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.041 06:36:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.041 06:36:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.041 06:36:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.041 06:36:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.041 06:36:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.041 06:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.041 06:36:16 -- paths/export.sh@5 -- # export PATH 00:18:24.041 06:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.041 06:36:16 -- nvmf/common.sh@46 -- # : 0 00:18:24.041 06:36:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:24.041 06:36:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:24.041 06:36:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:24.041 06:36:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.041 06:36:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.041 06:36:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:24.041 06:36:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:24.042 06:36:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:24.042 06:36:16 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:24.042 06:36:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:24.042 06:36:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.042 06:36:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:24.042 06:36:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:24.042 06:36:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:24.042 06:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.042 06:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.042 06:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.042 06:36:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:24.042 06:36:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:24.042 06:36:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:24.042 06:36:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:24.042 06:36:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:24.042 06:36:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:24.042 06:36:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.042 06:36:16 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.042 06:36:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.042 06:36:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:24.042 06:36:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.042 06:36:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.042 06:36:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.042 06:36:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.042 06:36:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.042 06:36:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.042 06:36:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.042 06:36:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.042 06:36:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:24.042 06:36:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:24.042 Cannot find device "nvmf_tgt_br" 00:18:24.042 06:36:16 -- nvmf/common.sh@154 -- # true 00:18:24.042 06:36:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.042 Cannot find device "nvmf_tgt_br2" 00:18:24.042 06:36:16 -- nvmf/common.sh@155 -- # true 00:18:24.042 06:36:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:24.042 06:36:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:24.042 Cannot find device "nvmf_tgt_br" 00:18:24.042 06:36:16 -- nvmf/common.sh@157 -- # true 00:18:24.042 06:36:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:24.042 Cannot find device "nvmf_tgt_br2" 00:18:24.042 06:36:16 -- nvmf/common.sh@158 -- # true 00:18:24.042 06:36:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:24.301 06:36:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:24.301 06:36:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.301 06:36:16 -- nvmf/common.sh@161 -- # true 00:18:24.301 06:36:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.301 06:36:16 -- nvmf/common.sh@162 -- # true 00:18:24.301 06:36:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.301 06:36:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.301 06:36:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.301 06:36:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.301 06:36:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.301 06:36:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.301 06:36:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.301 06:36:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.301 06:36:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.301 06:36:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:24.301 06:36:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:24.301 06:36:16 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:24.301 06:36:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:24.301 06:36:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.301 06:36:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.301 06:36:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.301 06:36:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:24.301 06:36:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:24.301 06:36:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.301 06:36:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.301 06:36:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.301 06:36:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.301 06:36:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.301 06:36:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:24.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:24.301 00:18:24.301 --- 10.0.0.2 ping statistics --- 00:18:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.301 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:24.301 06:36:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:24.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:24.301 00:18:24.301 --- 10.0.0.3 ping statistics --- 00:18:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.301 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:24.301 06:36:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:18:24.301 00:18:24.301 --- 10.0.0.1 ping statistics --- 00:18:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.301 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:24.301 06:36:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.301 06:36:16 -- nvmf/common.sh@421 -- # return 0 00:18:24.301 06:36:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:24.301 06:36:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.301 06:36:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:24.301 06:36:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:24.301 06:36:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.301 06:36:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:24.301 06:36:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:24.301 06:36:16 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90026 00:18:24.302 06:36:16 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:24.302 06:36:16 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90026 00:18:24.302 06:36:16 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:24.302 06:36:16 -- common/autotest_common.sh@819 -- # '[' -z 90026 ']' 00:18:24.302 06:36:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.302 06:36:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:24.302 06:36:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.302 06:36:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:24.302 06:36:16 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 06:36:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:25.680 06:36:17 -- common/autotest_common.sh@852 -- # return 0 00:18:25.680 06:36:17 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.680 06:36:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.680 06:36:17 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:25.680 06:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.680 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 Malloc0 00:18:25.680 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:25.680 06:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.680 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:25.680 06:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.680 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.680 06:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:25.680 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.680 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:25.680 06:36:18 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:25.946 Shutting down the fuzz application 00:18:25.946 06:36:18 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:26.209 Shutting down the fuzz application 00:18:26.209 06:36:18 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:26.209 06:36:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.210 06:36:18 -- common/autotest_common.sh@10 -- # set +x 00:18:26.210 06:36:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.210 06:36:18 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:26.210 06:36:18 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:26.210 06:36:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:26.210 06:36:18 -- nvmf/common.sh@116 -- # sync 00:18:26.210 06:36:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:26.210 06:36:18 -- nvmf/common.sh@119 -- # set +e 00:18:26.210 06:36:18 -- 
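
Stripped of tracing, the whole fuzz pass below reduces to: expose a 64 MiB malloc bdev through one TCP subsystem, then run nvme_fuzz against it twice, first generating random commands for 30 seconds from a fixed seed (-t 30 -S 123456 -N), then replaying the canned JSON command set:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a
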
nvmf/common.sh@120 -- # for i in {1..20} 00:18:26.210 06:36:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:26.468 rmmod nvme_tcp 00:18:26.468 rmmod nvme_fabrics 00:18:26.468 rmmod nvme_keyring 00:18:26.468 06:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:26.468 06:36:18 -- nvmf/common.sh@123 -- # set -e 00:18:26.468 06:36:18 -- nvmf/common.sh@124 -- # return 0 00:18:26.468 06:36:18 -- nvmf/common.sh@477 -- # '[' -n 90026 ']' 00:18:26.468 06:36:18 -- nvmf/common.sh@478 -- # killprocess 90026 00:18:26.468 06:36:18 -- common/autotest_common.sh@926 -- # '[' -z 90026 ']' 00:18:26.468 06:36:18 -- common/autotest_common.sh@930 -- # kill -0 90026 00:18:26.468 06:36:18 -- common/autotest_common.sh@931 -- # uname 00:18:26.468 06:36:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:26.469 06:36:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90026 00:18:26.469 06:36:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:26.469 06:36:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:26.469 killing process with pid 90026 00:18:26.469 06:36:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90026' 00:18:26.469 06:36:18 -- common/autotest_common.sh@945 -- # kill 90026 00:18:26.469 06:36:18 -- common/autotest_common.sh@950 -- # wait 90026 00:18:26.727 06:36:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:26.727 06:36:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:26.727 06:36:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:26.727 06:36:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.727 06:36:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:26.727 06:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.727 06:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.727 06:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.727 06:36:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:26.727 06:36:19 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:26.727 00:18:26.727 real 0m2.759s 00:18:26.727 user 0m2.934s 00:18:26.727 sys 0m0.691s 00:18:26.727 06:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.727 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:18:26.727 ************************************ 00:18:26.727 END TEST nvmf_fuzz 00:18:26.727 ************************************ 00:18:26.727 06:36:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:26.727 06:36:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:26.727 06:36:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.727 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:18:26.727 ************************************ 00:18:26.727 START TEST nvmf_multiconnection 00:18:26.727 ************************************ 00:18:26.727 06:36:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:26.727 * Looking for test storage... 
00:18:26.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:26.727 06:36:19 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.986 06:36:19 -- nvmf/common.sh@7 -- # uname -s 00:18:26.986 06:36:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.986 06:36:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.986 06:36:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.986 06:36:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.986 06:36:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.986 06:36:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.986 06:36:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.986 06:36:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.986 06:36:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.986 06:36:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.986 06:36:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:26.986 06:36:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:18:26.986 06:36:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.986 06:36:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.986 06:36:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.986 06:36:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.986 06:36:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.986 06:36:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.986 06:36:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.986 06:36:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.986 06:36:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.986 06:36:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.986 06:36:19 -- 
paths/export.sh@5 -- # export PATH 00:18:26.986 06:36:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.986 06:36:19 -- nvmf/common.sh@46 -- # : 0 00:18:26.986 06:36:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:26.986 06:36:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:26.986 06:36:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:26.986 06:36:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.986 06:36:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.987 06:36:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:26.987 06:36:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:26.987 06:36:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:26.987 06:36:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.987 06:36:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.987 06:36:19 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:26.987 06:36:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:26.987 06:36:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:26.987 06:36:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.987 06:36:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:26.987 06:36:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:26.987 06:36:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:26.987 06:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.987 06:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.987 06:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.987 06:36:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:26.987 06:36:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:26.987 06:36:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:26.987 06:36:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:26.987 06:36:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:26.987 06:36:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:26.987 06:36:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.987 06:36:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.987 06:36:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:26.987 06:36:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:26.987 06:36:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.987 06:36:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.987 06:36:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.987 06:36:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.987 06:36:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.987 06:36:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.987 06:36:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.987 06:36:19 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.987 06:36:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:26.987 06:36:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:26.987 Cannot find device "nvmf_tgt_br" 00:18:26.987 06:36:19 -- nvmf/common.sh@154 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.987 Cannot find device "nvmf_tgt_br2" 00:18:26.987 06:36:19 -- nvmf/common.sh@155 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:26.987 06:36:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:26.987 Cannot find device "nvmf_tgt_br" 00:18:26.987 06:36:19 -- nvmf/common.sh@157 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:26.987 Cannot find device "nvmf_tgt_br2" 00:18:26.987 06:36:19 -- nvmf/common.sh@158 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:26.987 06:36:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:26.987 06:36:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.987 06:36:19 -- nvmf/common.sh@161 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.987 06:36:19 -- nvmf/common.sh@162 -- # true 00:18:26.987 06:36:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.987 06:36:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.987 06:36:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.987 06:36:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.987 06:36:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:27.246 06:36:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:27.246 06:36:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:27.246 06:36:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:27.246 06:36:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:27.246 06:36:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:27.246 06:36:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:27.246 06:36:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:27.246 06:36:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:27.246 06:36:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:27.246 06:36:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:27.246 06:36:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:27.246 06:36:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:27.246 06:36:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:27.246 06:36:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:27.246 06:36:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:27.246 06:36:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.246 
06:36:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.246 06:36:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:27.246 06:36:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:27.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:18:27.246 00:18:27.246 --- 10.0.0.2 ping statistics --- 00:18:27.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.246 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:27.246 06:36:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:27.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:27.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:27.246 00:18:27.246 --- 10.0.0.3 ping statistics --- 00:18:27.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.246 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:27.246 06:36:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:18:27.246 00:18:27.246 --- 10.0.0.1 ping statistics --- 00:18:27.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.246 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:27.246 06:36:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.246 06:36:19 -- nvmf/common.sh@421 -- # return 0 00:18:27.246 06:36:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.246 06:36:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.246 06:36:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:27.246 06:36:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:27.246 06:36:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.246 06:36:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:27.246 06:36:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:27.246 06:36:19 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:27.246 06:36:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.246 06:36:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:27.246 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.246 06:36:19 -- nvmf/common.sh@469 -- # nvmfpid=90239 00:18:27.246 06:36:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.246 06:36:19 -- nvmf/common.sh@470 -- # waitforlisten 90239 00:18:27.246 06:36:19 -- common/autotest_common.sh@819 -- # '[' -z 90239 ']' 00:18:27.246 06:36:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.246 06:36:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:27.246 06:36:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.246 06:36:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:27.246 06:36:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.246 [2024-10-04 06:36:19.917084] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
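The namespace and veth plumbing traced above reduces to a short sequence. A condensed sketch of the nvmf_veth_init steps as the trace shows them (same interface and namespace names, trimmed to the first target interface), not the verbatim script:

    # Target lives in its own network namespace, joined to the host by veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator side gets 10.0.0.1, target side 10.0.0.2.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring everything up and bridge the peer ends together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open TCP/4420 for NVMe-oF, allow bridged forwarding, confirm reachability,
    # then start the target inside the namespace exactly as the trace does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF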
00:18:27.246 [2024-10-04 06:36:19.917192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.506 [2024-10-04 06:36:20.051601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.506 [2024-10-04 06:36:20.127013] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.506 [2024-10-04 06:36:20.127184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.506 [2024-10-04 06:36:20.127198] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.506 [2024-10-04 06:36:20.127206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.506 [2024-10-04 06:36:20.127380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.506 [2024-10-04 06:36:20.127663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.506 [2024-10-04 06:36:20.128180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.506 [2024-10-04 06:36:20.128227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.441 06:36:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.441 06:36:20 -- common/autotest_common.sh@852 -- # return 0 00:18:28.441 06:36:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.441 06:36:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.441 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 06:36:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.441 06:36:20 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.441 06:36:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 [2024-10-04 06:36:20.977083] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:28.441 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.441 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 Malloc1 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 [2024-10-04 06:36:21.063104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.441 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 Malloc2 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.441 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.441 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:28.441 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.441 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.700 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 Malloc3 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.700 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:28.700 
06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 Malloc4 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.700 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 Malloc5 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.700 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 Malloc6 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.700 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.700 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 Malloc7 00:18:28.700 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.700 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:28.700 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.960 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 Malloc8 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 
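The repeated rpc_cmd blocks here are one loop in multiconnection.sh: for each of NVMF_SUBSYS=11 subsystems it creates a 64 MiB malloc bdev, a subsystem with a matching serial, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A condensed sketch of that loop as the trace shows it, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # One 64 MiB bdev with 512-byte blocks per subsystem.
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # All eleven subsystems listen on the same address/port pair.
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done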
00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.960 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 Malloc9 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.960 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 Malloc10 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.960 06:36:21 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 Malloc11 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.960 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.960 06:36:21 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:28.960 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.960 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:29.219 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:29.219 06:36:21 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:29.219 06:36:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:29.219 06:36:21 -- common/autotest_common.sh@10 -- # set +x 00:18:29.219 06:36:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:29.219 06:36:21 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:29.219 06:36:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.219 06:36:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.219 06:36:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:29.219 06:36:21 -- common/autotest_common.sh@1177 -- # local i=0 00:18:29.219 06:36:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.219 06:36:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:29.219 06:36:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:31.761 06:36:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:31.761 06:36:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:31.761 06:36:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:31.761 06:36:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:31.762 06:36:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.762 06:36:23 -- common/autotest_common.sh@1187 -- # return 0 00:18:31.762 06:36:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.762 06:36:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:31.762 06:36:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:31.762 06:36:24 -- common/autotest_common.sh@1177 -- # local i=0 00:18:31.762 06:36:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.762 06:36:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:31.762 06:36:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:33.667 06:36:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:33.667 06:36:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:33.667 06:36:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:33.667 06:36:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:33.667 06:36:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.667 06:36:26 -- common/autotest_common.sh@1187 -- # return 0 00:18:33.667 06:36:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:33.667 06:36:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:33.667 06:36:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:33.667 06:36:26 -- common/autotest_common.sh@1177 -- # local i=0 00:18:33.667 06:36:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.667 06:36:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:33.667 06:36:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:35.567 06:36:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:35.568 06:36:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:35.568 06:36:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:35.826 06:36:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:35.826 06:36:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.826 06:36:28 -- common/autotest_common.sh@1187 -- # return 0 00:18:35.826 06:36:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.826 06:36:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:35.826 06:36:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:35.826 06:36:28 -- common/autotest_common.sh@1177 -- # local i=0 00:18:35.826 06:36:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:35.826 06:36:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:35.826 06:36:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:38.363 06:36:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:38.363 06:36:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:38.364 06:36:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:38.364 06:36:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:38.364 06:36:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.364 06:36:30 -- common/autotest_common.sh@1187 -- # return 0 00:18:38.364 06:36:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.364 06:36:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:38.364 06:36:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:38.364 06:36:30 -- common/autotest_common.sh@1177 -- # local i=0 00:18:38.364 06:36:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.364 06:36:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:38.364 06:36:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:40.270 06:36:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:40.270 06:36:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:40.270 06:36:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:40.270 06:36:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:40.270 06:36:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.270 06:36:32 
-- common/autotest_common.sh@1187 -- # return 0 00:18:40.270 06:36:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.270 06:36:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:40.270 06:36:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:40.270 06:36:32 -- common/autotest_common.sh@1177 -- # local i=0 00:18:40.270 06:36:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.270 06:36:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:40.270 06:36:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:42.211 06:36:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:42.211 06:36:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:42.211 06:36:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:42.211 06:36:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:42.211 06:36:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.211 06:36:34 -- common/autotest_common.sh@1187 -- # return 0 00:18:42.211 06:36:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.211 06:36:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:42.469 06:36:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:42.469 06:36:35 -- common/autotest_common.sh@1177 -- # local i=0 00:18:42.469 06:36:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.469 06:36:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:42.469 06:36:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:44.375 06:36:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:44.375 06:36:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:44.375 06:36:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:44.375 06:36:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:44.375 06:36:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.375 06:36:37 -- common/autotest_common.sh@1187 -- # return 0 00:18:44.375 06:36:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.375 06:36:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:44.634 06:36:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:44.634 06:36:37 -- common/autotest_common.sh@1177 -- # local i=0 00:18:44.634 06:36:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.634 06:36:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:44.634 06:36:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:47.170 06:36:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:47.170 06:36:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:47.170 06:36:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:47.170 06:36:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
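Each nvme connect above is followed by the same readiness poll: waitforserial re-runs lsblk until one block device reports the expected serial (SPDK1 through SPDK11), sleeping two seconds between attempts and capping the retries. A sketch reconstructed from the traced common/autotest_common.sh lines:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches, e.g. SPDK7.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
    waitforserial SPDK7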
00:18:47.170 06:36:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.170 06:36:39 -- common/autotest_common.sh@1187 -- # return 0 00:18:47.170 06:36:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.170 06:36:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:47.170 06:36:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:47.170 06:36:39 -- common/autotest_common.sh@1177 -- # local i=0 00:18:47.170 06:36:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.170 06:36:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:47.170 06:36:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:49.074 06:36:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:49.074 06:36:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:49.074 06:36:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:49.074 06:36:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:49.074 06:36:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.074 06:36:41 -- common/autotest_common.sh@1187 -- # return 0 00:18:49.074 06:36:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.074 06:36:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:49.074 06:36:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:49.074 06:36:41 -- common/autotest_common.sh@1177 -- # local i=0 00:18:49.074 06:36:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.074 06:36:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:49.074 06:36:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:51.606 06:36:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:51.606 06:36:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:51.606 06:36:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:51.606 06:36:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:51.606 06:36:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.606 06:36:43 -- common/autotest_common.sh@1187 -- # return 0 00:18:51.606 06:36:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.606 06:36:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:51.606 06:36:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:51.606 06:36:43 -- common/autotest_common.sh@1177 -- # local i=0 00:18:51.606 06:36:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.606 06:36:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:51.606 06:36:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:53.508 06:36:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:53.508 06:36:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:53.508 06:36:45 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:53.508 06:36:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:53.508 06:36:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.508 06:36:45 -- common/autotest_common.sh@1187 -- # return 0 00:18:53.508 06:36:45 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:53.508 [global] 00:18:53.508 thread=1 00:18:53.508 invalidate=1 00:18:53.508 rw=read 00:18:53.508 time_based=1 00:18:53.508 runtime=10 00:18:53.508 ioengine=libaio 00:18:53.508 direct=1 00:18:53.508 bs=262144 00:18:53.508 iodepth=64 00:18:53.508 norandommap=1 00:18:53.508 numjobs=1 00:18:53.508 00:18:53.508 [job0] 00:18:53.508 filename=/dev/nvme0n1 00:18:53.508 [job1] 00:18:53.508 filename=/dev/nvme10n1 00:18:53.508 [job2] 00:18:53.508 filename=/dev/nvme1n1 00:18:53.508 [job3] 00:18:53.508 filename=/dev/nvme2n1 00:18:53.508 [job4] 00:18:53.508 filename=/dev/nvme3n1 00:18:53.508 [job5] 00:18:53.508 filename=/dev/nvme4n1 00:18:53.508 [job6] 00:18:53.508 filename=/dev/nvme5n1 00:18:53.508 [job7] 00:18:53.508 filename=/dev/nvme6n1 00:18:53.508 [job8] 00:18:53.508 filename=/dev/nvme7n1 00:18:53.508 [job9] 00:18:53.508 filename=/dev/nvme8n1 00:18:53.508 [job10] 00:18:53.508 filename=/dev/nvme9n1 00:18:53.508 Could not set queue depth (nvme0n1) 00:18:53.508 Could not set queue depth (nvme10n1) 00:18:53.508 Could not set queue depth (nvme1n1) 00:18:53.508 Could not set queue depth (nvme2n1) 00:18:53.508 Could not set queue depth (nvme3n1) 00:18:53.508 Could not set queue depth (nvme4n1) 00:18:53.508 Could not set queue depth (nvme5n1) 00:18:53.508 Could not set queue depth (nvme6n1) 00:18:53.508 Could not set queue depth (nvme7n1) 00:18:53.508 Could not set queue depth (nvme8n1) 00:18:53.508 Could not set queue depth (nvme9n1) 00:18:53.767 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.767 fio-3.35 00:18:53.767 Starting 11 threads 00:19:05.975 00:19:05.975 job0: (groupid=0, jobs=1): err= 0: pid=90717: Fri Oct 4 06:36:56 2024 00:19:05.975 read: IOPS=572, BW=143MiB/s (150MB/s)(1446MiB/10109msec) 00:19:05.975 slat (usec): min=16, max=65745, avg=1682.86, stdev=6004.65 
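Reassembled from the lines above, the read-phase job file that fio-wrapper (-p nvmf -i 262144 -d 64 -t read -r 10) generated looks like this; the /dev/nvme*n1 names reflect the kernel's enumeration order of the 11 connected namespaces:

    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1

    ; job2 .. job10 continue the pattern with /dev/nvme1n1 .. /dev/nvme9n1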
00:19:05.975 clat (msec): min=18, max=220, avg=109.92, stdev=23.75 00:19:05.975 lat (msec): min=20, max=221, avg=111.61, stdev=24.51 00:19:05.975 clat percentiles (msec): 00:19:05.975 | 1.00th=[ 33], 5.00th=[ 55], 10.00th=[ 84], 20.00th=[ 99], 00:19:05.975 | 30.00th=[ 107], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 117], 00:19:05.975 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 132], 95.00th=[ 138], 00:19:05.975 | 99.00th=[ 155], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 222], 00:19:05.975 | 99.99th=[ 222] 00:19:05.975 bw ( KiB/s): min=120320, max=269312, per=9.03%, avg=146344.95, stdev=30816.40, samples=20 00:19:05.975 iops : min= 470, max= 1052, avg=571.50, stdev=120.41, samples=20 00:19:05.975 lat (msec) : 20=0.02%, 50=4.22%, 100=17.72%, 250=78.04% 00:19:05.975 cpu : usr=0.33%, sys=2.07%, ctx=1121, majf=0, minf=4097 00:19:05.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:05.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.975 issued rwts: total=5784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.975 job1: (groupid=0, jobs=1): err= 0: pid=90718: Fri Oct 4 06:36:56 2024 00:19:05.975 read: IOPS=516, BW=129MiB/s (135MB/s)(1302MiB/10075msec) 00:19:05.975 slat (usec): min=16, max=152886, avg=1797.35, stdev=6871.91 00:19:05.975 clat (msec): min=21, max=359, avg=121.88, stdev=31.23 00:19:05.975 lat (msec): min=21, max=381, avg=123.67, stdev=32.26 00:19:05.975 clat percentiles (msec): 00:19:05.975 | 1.00th=[ 52], 5.00th=[ 83], 10.00th=[ 99], 20.00th=[ 107], 00:19:05.975 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 122], 00:19:05.975 | 70.00th=[ 126], 80.00th=[ 131], 90.00th=[ 144], 95.00th=[ 207], 00:19:05.975 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 284], 99.95th=[ 359], 00:19:05.975 | 99.99th=[ 359] 00:19:05.975 bw ( KiB/s): min=62976, max=163840, per=8.13%, avg=131659.00, stdev=21524.07, samples=20 00:19:05.975 iops : min= 246, max= 640, avg=514.20, stdev=84.05, samples=20 00:19:05.975 lat (msec) : 50=0.94%, 100=10.10%, 250=88.61%, 500=0.35% 00:19:05.975 cpu : usr=0.09%, sys=1.81%, ctx=1232, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=5207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job2: (groupid=0, jobs=1): err= 0: pid=90719: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=505, BW=126MiB/s (133MB/s)(1276MiB/10089msec) 00:19:05.976 slat (usec): min=15, max=116219, avg=1887.69, stdev=6588.12 00:19:05.976 clat (msec): min=25, max=336, avg=124.41, stdev=30.13 00:19:05.976 lat (msec): min=26, max=336, avg=126.30, stdev=30.98 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 68], 5.00th=[ 95], 10.00th=[ 101], 20.00th=[ 108], 00:19:05.976 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 120], 60.00th=[ 124], 00:19:05.976 | 70.00th=[ 128], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 203], 00:19:05.976 | 99.00th=[ 236], 99.50th=[ 251], 99.90th=[ 275], 99.95th=[ 275], 00:19:05.976 | 99.99th=[ 338] 00:19:05.976 bw ( KiB/s): min=63615, max=160256, per=7.96%, avg=128991.15, stdev=20523.40, samples=20 00:19:05.976 iops : 
min= 248, max= 626, avg=503.65, stdev=80.25, samples=20 00:19:05.976 lat (msec) : 50=0.24%, 100=8.96%, 250=90.30%, 500=0.51% 00:19:05.976 cpu : usr=0.20%, sys=1.79%, ctx=1155, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job3: (groupid=0, jobs=1): err= 0: pid=90720: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=514, BW=129MiB/s (135MB/s)(1298MiB/10081msec) 00:19:05.976 slat (usec): min=19, max=100227, avg=1921.20, stdev=6719.26 00:19:05.976 clat (msec): min=29, max=280, avg=122.10, stdev=33.60 00:19:05.976 lat (msec): min=29, max=316, avg=124.02, stdev=34.50 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 45], 5.00th=[ 83], 10.00th=[ 96], 20.00th=[ 105], 00:19:05.976 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 122], 00:19:05.976 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 213], 00:19:05.976 | 99.00th=[ 232], 99.50th=[ 245], 99.90th=[ 279], 99.95th=[ 279], 00:19:05.976 | 99.99th=[ 279] 00:19:05.976 bw ( KiB/s): min=68096, max=185344, per=8.10%, avg=131236.80, stdev=27133.75, samples=20 00:19:05.976 iops : min= 266, max= 724, avg=512.50, stdev=105.98, samples=20 00:19:05.976 lat (msec) : 50=1.48%, 100=11.98%, 250=86.23%, 500=0.31% 00:19:05.976 cpu : usr=0.22%, sys=1.61%, ctx=1032, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=5191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job4: (groupid=0, jobs=1): err= 0: pid=90721: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=614, BW=154MiB/s (161MB/s)(1549MiB/10085msec) 00:19:05.976 slat (usec): min=20, max=125741, avg=1546.75, stdev=5539.89 00:19:05.976 clat (msec): min=40, max=227, avg=102.45, stdev=26.45 00:19:05.976 lat (msec): min=40, max=299, avg=104.00, stdev=27.08 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 59], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 80], 00:19:05.976 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 111], 00:19:05.976 | 70.00th=[ 118], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 142], 00:19:05.976 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 224], 99.95th=[ 224], 00:19:05.976 | 99.99th=[ 228] 00:19:05.976 bw ( KiB/s): min=120832, max=209920, per=9.69%, avg=156946.40, stdev=32632.33, samples=20 00:19:05.976 iops : min= 472, max= 820, avg=612.95, stdev=127.52, samples=20 00:19:05.976 lat (msec) : 50=0.26%, 100=51.38%, 250=48.36% 00:19:05.976 cpu : usr=0.23%, sys=2.21%, ctx=1293, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=6195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job5: (groupid=0, jobs=1): err= 0: 
pid=90722: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=627, BW=157MiB/s (164MB/s)(1585MiB/10102msec) 00:19:05.976 slat (usec): min=16, max=70759, avg=1546.38, stdev=5221.03 00:19:05.976 clat (msec): min=20, max=231, avg=100.24, stdev=25.42 00:19:05.976 lat (msec): min=20, max=231, avg=101.79, stdev=26.05 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 42], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 79], 00:19:05.976 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 97], 60.00th=[ 109], 00:19:05.976 | 70.00th=[ 117], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 140], 00:19:05.976 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 218], 99.95th=[ 218], 00:19:05.976 | 99.99th=[ 232] 00:19:05.976 bw ( KiB/s): min=110813, max=221696, per=9.91%, avg=160497.00, stdev=34477.75, samples=20 00:19:05.976 iops : min= 432, max= 866, avg=626.75, stdev=134.73, samples=20 00:19:05.976 lat (msec) : 50=1.58%, 100=51.81%, 250=46.61% 00:19:05.976 cpu : usr=0.20%, sys=1.97%, ctx=1346, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=6338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job6: (groupid=0, jobs=1): err= 0: pid=90723: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=733, BW=183MiB/s (192MB/s)(1844MiB/10059msec) 00:19:05.976 slat (usec): min=20, max=68861, avg=1323.14, stdev=4817.02 00:19:05.976 clat (msec): min=28, max=174, avg=85.78, stdev=26.46 00:19:05.976 lat (msec): min=28, max=206, avg=87.10, stdev=26.98 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 53], 20.00th=[ 67], 00:19:05.976 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 86], 00:19:05.976 | 70.00th=[ 100], 80.00th=[ 113], 90.00th=[ 125], 95.00th=[ 132], 00:19:05.976 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 171], 00:19:05.976 | 99.99th=[ 176] 00:19:05.976 bw ( KiB/s): min=130290, max=332800, per=11.55%, avg=187043.45, stdev=49495.97, samples=20 00:19:05.976 iops : min= 508, max= 1300, avg=730.40, stdev=193.47, samples=20 00:19:05.976 lat (msec) : 50=9.10%, 100=62.06%, 250=28.84% 00:19:05.976 cpu : usr=0.27%, sys=2.23%, ctx=1419, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=7374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job7: (groupid=0, jobs=1): err= 0: pid=90724: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=600, BW=150MiB/s (157MB/s)(1517MiB/10107msec) 00:19:05.976 slat (usec): min=15, max=154461, avg=1549.82, stdev=6284.68 00:19:05.976 clat (usec): min=923, max=238805, avg=104813.14, stdev=34874.67 00:19:05.976 lat (usec): min=951, max=301081, avg=106362.96, stdev=35761.50 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 83], 00:19:05.976 | 30.00th=[ 102], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 120], 00:19:05.976 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 134], 95.00th=[ 142], 00:19:05.976 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 232], 
99.95th=[ 232], 00:19:05.976 | 99.99th=[ 239] 00:19:05.976 bw ( KiB/s): min=113664, max=351744, per=9.48%, avg=153621.35, stdev=56709.53, samples=20 00:19:05.976 iops : min= 444, max= 1374, avg=599.85, stdev=221.61, samples=20 00:19:05.976 lat (usec) : 1000=0.02% 00:19:05.976 lat (msec) : 2=0.23%, 4=0.05%, 10=0.87%, 20=1.85%, 50=7.88% 00:19:05.976 lat (msec) : 100=17.80%, 250=71.31% 00:19:05.976 cpu : usr=0.23%, sys=2.09%, ctx=1204, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=6068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job8: (groupid=0, jobs=1): err= 0: pid=90725: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=494, BW=124MiB/s (130MB/s)(1248MiB/10095msec) 00:19:05.976 slat (usec): min=16, max=80532, avg=1941.59, stdev=6632.57 00:19:05.976 clat (msec): min=32, max=294, avg=127.24, stdev=30.81 00:19:05.976 lat (msec): min=32, max=295, avg=129.18, stdev=31.83 00:19:05.976 clat percentiles (msec): 00:19:05.976 | 1.00th=[ 57], 5.00th=[ 93], 10.00th=[ 103], 20.00th=[ 111], 00:19:05.976 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 127], 00:19:05.976 | 70.00th=[ 131], 80.00th=[ 136], 90.00th=[ 150], 95.00th=[ 209], 00:19:05.976 | 99.00th=[ 236], 99.50th=[ 241], 99.90th=[ 247], 99.95th=[ 268], 00:19:05.976 | 99.99th=[ 296] 00:19:05.976 bw ( KiB/s): min=65667, max=148992, per=7.79%, avg=126111.65, stdev=19094.62, samples=20 00:19:05.976 iops : min= 256, max= 582, avg=492.50, stdev=74.66, samples=20 00:19:05.976 lat (msec) : 50=0.50%, 100=6.85%, 250=92.57%, 500=0.08% 00:19:05.976 cpu : usr=0.24%, sys=1.60%, ctx=1144, majf=0, minf=4097 00:19:05.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:05.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.976 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.976 job9: (groupid=0, jobs=1): err= 0: pid=90726: Fri Oct 4 06:36:56 2024 00:19:05.976 read: IOPS=509, BW=127MiB/s (133MB/s)(1284MiB/10089msec) 00:19:05.976 slat (usec): min=17, max=108664, avg=1901.32, stdev=6442.01 00:19:05.976 clat (msec): min=19, max=328, avg=123.47, stdev=31.89 00:19:05.977 lat (msec): min=19, max=328, avg=125.37, stdev=32.75 00:19:05.977 clat percentiles (msec): 00:19:05.977 | 1.00th=[ 74], 5.00th=[ 95], 10.00th=[ 102], 20.00th=[ 107], 00:19:05.977 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 120], 00:19:05.977 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 213], 00:19:05.977 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 288], 00:19:05.977 | 99.99th=[ 330] 00:19:05.977 bw ( KiB/s): min=71168, max=148480, per=8.02%, avg=129865.40, stdev=21735.46, samples=20 00:19:05.977 iops : min= 278, max= 580, avg=507.15, stdev=84.83, samples=20 00:19:05.977 lat (msec) : 20=0.19%, 50=0.18%, 100=8.20%, 250=90.75%, 500=0.68% 00:19:05.977 cpu : usr=0.28%, sys=2.07%, ctx=862, majf=0, minf=4097 00:19:05.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
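As a quick cross-check on how these fio summaries hang together, a job's headline numbers can be re-derived from its issued I/O count. Taking job3 above (bs=256KiB, issued total=5191 reads over 10081 msec), a shell sketch of the arithmetic:

    echo $((5191 * 256 / 1024))                # 1297 MiB read, matches io=1298MiB
    echo 'scale=1; 5191 / 10.081' | bc         # 514.9, matches IOPS=514
    echo 'scale=1; 5191 * 0.25 / 10.081' | bc  # 128.7, matches BW=129MiB/s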
00:19:05.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.977 issued rwts: total=5136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.977 job10: (groupid=0, jobs=1): err= 0: pid=90727: Fri Oct 4 06:36:56 2024 00:19:05.977 read: IOPS=654, BW=164MiB/s (172MB/s)(1645MiB/10053msec) 00:19:05.977 slat (usec): min=15, max=60934, avg=1468.20, stdev=5176.81 00:19:05.977 clat (msec): min=26, max=258, avg=96.12, stdev=37.68 00:19:05.977 lat (msec): min=26, max=280, avg=97.58, stdev=38.44 00:19:05.977 clat percentiles (msec): 00:19:05.977 | 1.00th=[ 49], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 70], 00:19:05.977 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 95], 00:19:05.977 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 171], 00:19:05.977 | 99.00th=[ 236], 99.50th=[ 236], 99.90th=[ 259], 99.95th=[ 259], 00:19:05.977 | 99.99th=[ 259] 00:19:05.977 bw ( KiB/s): min=67072, max=242688, per=10.30%, avg=166774.45, stdev=51571.79, samples=20 00:19:05.977 iops : min= 262, max= 948, avg=651.35, stdev=201.43, samples=20 00:19:05.977 lat (msec) : 50=1.08%, 100=62.25%, 250=36.53%, 500=0.14% 00:19:05.977 cpu : usr=0.19%, sys=2.18%, ctx=1260, majf=0, minf=4097 00:19:05.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:05.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.977 issued rwts: total=6578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.977 00:19:05.977 Run status group 0 (all jobs): 00:19:05.977 READ: bw=1582MiB/s (1659MB/s), 124MiB/s-183MiB/s (130MB/s-192MB/s), io=15.6GiB (16.8GB), run=10053-10109msec 00:19:05.977 00:19:05.977 Disk stats (read/write): 00:19:05.977 nvme0n1: ios=11477/0, merge=0/0, ticks=1241265/0, in_queue=1241265, util=97.67% 00:19:05.977 nvme10n1: ios=10286/0, merge=0/0, ticks=1238562/0, in_queue=1238562, util=97.28% 00:19:05.977 nvme1n1: ios=10092/0, merge=0/0, ticks=1239734/0, in_queue=1239734, util=97.66% 00:19:05.977 nvme2n1: ios=10262/0, merge=0/0, ticks=1237492/0, in_queue=1237492, util=97.50% 00:19:05.977 nvme3n1: ios=12262/0, merge=0/0, ticks=1235359/0, in_queue=1235359, util=97.70% 00:19:05.977 nvme4n1: ios=12579/0, merge=0/0, ticks=1236398/0, in_queue=1236398, util=97.68% 00:19:05.977 nvme5n1: ios=14691/0, merge=0/0, ticks=1238245/0, in_queue=1238245, util=97.83% 00:19:05.977 nvme6n1: ios=12009/0, merge=0/0, ticks=1237096/0, in_queue=1237096, util=98.34% 00:19:05.977 nvme7n1: ios=9852/0, merge=0/0, ticks=1239631/0, in_queue=1239631, util=98.27% 00:19:05.977 nvme8n1: ios=10171/0, merge=0/0, ticks=1238645/0, in_queue=1238645, util=98.78% 00:19:05.977 nvme9n1: ios=13038/0, merge=0/0, ticks=1241654/0, in_queue=1241654, util=98.60% 00:19:05.977 06:36:56 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:05.977 [global] 00:19:05.977 thread=1 00:19:05.977 invalidate=1 00:19:05.977 rw=randwrite 00:19:05.977 time_based=1 00:19:05.977 runtime=10 00:19:05.977 ioengine=libaio 00:19:05.977 direct=1 00:19:05.977 bs=262144 00:19:05.977 iodepth=64 00:19:05.977 norandommap=1 00:19:05.977 numjobs=1 00:19:05.977 00:19:05.977 [job0] 00:19:05.977 filename=/dev/nvme0n1 00:19:05.977 [job1] 00:19:05.977 filename=/dev/nvme10n1 00:19:05.977 [job2] 00:19:05.977 
filename=/dev/nvme1n1 00:19:05.977 [job3] 00:19:05.977 filename=/dev/nvme2n1 00:19:05.977 [job4] 00:19:05.977 filename=/dev/nvme3n1 00:19:05.977 [job5] 00:19:05.977 filename=/dev/nvme4n1 00:19:05.977 [job6] 00:19:05.977 filename=/dev/nvme5n1 00:19:05.977 [job7] 00:19:05.977 filename=/dev/nvme6n1 00:19:05.977 [job8] 00:19:05.977 filename=/dev/nvme7n1 00:19:05.977 [job9] 00:19:05.977 filename=/dev/nvme8n1 00:19:05.977 [job10] 00:19:05.977 filename=/dev/nvme9n1 00:19:05.977 Could not set queue depth (nvme0n1) 00:19:05.977 Could not set queue depth (nvme10n1) 00:19:05.977 Could not set queue depth (nvme1n1) 00:19:05.977 Could not set queue depth (nvme2n1) 00:19:05.977 Could not set queue depth (nvme3n1) 00:19:05.977 Could not set queue depth (nvme4n1) 00:19:05.977 Could not set queue depth (nvme5n1) 00:19:05.977 Could not set queue depth (nvme6n1) 00:19:05.977 Could not set queue depth (nvme7n1) 00:19:05.977 Could not set queue depth (nvme8n1) 00:19:05.977 Could not set queue depth (nvme9n1) 00:19:05.977 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.977 fio-3.35 00:19:05.977 Starting 11 threads 00:19:15.972 00:19:15.972 job0: (groupid=0, jobs=1): err= 0: pid=90923: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=1125, BW=281MiB/s (295MB/s)(2831MiB/10058msec); 0 zone resets 00:19:15.972 slat (usec): min=25, max=14134, avg=857.27, stdev=1724.26 00:19:15.972 clat (msec): min=5, max=176, avg=55.93, stdev=30.39 00:19:15.972 lat (msec): min=5, max=177, avg=56.78, stdev=30.83 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 28], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:19:15.972 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 46], 60.00th=[ 47], 00:19:15.972 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 89], 95.00th=[ 146], 00:19:15.972 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 178], 00:19:15.972 | 99.99th=[ 178] 00:19:15.972 bw ( KiB/s): min=104448, max=373248, per=21.83%, avg=288233.50, stdev=102115.08, samples=20 00:19:15.972 iops : min= 408, max= 1458, avg=1125.85, stdev=398.86, samples=20 00:19:15.972 lat (msec) : 10=0.16%, 20=0.39%, 50=85.57%, 100=4.04%, 250=9.85% 00:19:15.972 cpu : usr=3.05%, sys=2.63%, ctx=15400, majf=0, minf=1 
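For reference, the randwrite job file printed by fio-wrapper above maps onto an ordinary fio command line. A rough single-device equivalent, reconstructed from the [global] section rather than taken from the wrapper's literal invocation, would be:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --norandommap --numjobs=1 --time_based --runtime=10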
00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,11325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.972 job1: (groupid=0, jobs=1): err= 0: pid=90924: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=454, BW=114MiB/s (119MB/s)(1149MiB/10117msec); 0 zone resets 00:19:15.972 slat (usec): min=19, max=64349, avg=2134.82, stdev=3803.85 00:19:15.972 clat (msec): min=11, max=237, avg=138.73, stdev=13.35 00:19:15.972 lat (msec): min=12, max=237, avg=140.87, stdev=12.97 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:19:15.972 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:19:15.972 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 144], 95.00th=[ 148], 00:19:15.972 | 99.00th=[ 197], 99.50th=[ 205], 99.90th=[ 226], 99.95th=[ 232], 00:19:15.972 | 99.99th=[ 239] 00:19:15.972 bw ( KiB/s): min=86016, max=122880, per=8.79%, avg=116007.75, stdev=7584.48, samples=20 00:19:15.972 iops : min= 336, max= 480, avg=453.15, stdev=29.63, samples=20 00:19:15.972 lat (msec) : 20=0.24%, 100=0.35%, 250=99.41% 00:19:15.972 cpu : usr=1.40%, sys=1.40%, ctx=6038, majf=0, minf=1 00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,4595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.972 job2: (groupid=0, jobs=1): err= 0: pid=90935: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=314, BW=78.7MiB/s (82.5MB/s)(801MiB/10173msec); 0 zone resets 00:19:15.972 slat (usec): min=21, max=75188, avg=3068.87, stdev=5998.91 00:19:15.972 clat (msec): min=6, max=280, avg=200.03, stdev=35.42 00:19:15.972 lat (msec): min=7, max=281, avg=203.10, stdev=35.49 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 54], 5.00th=[ 124], 10.00th=[ 180], 20.00th=[ 188], 00:19:15.972 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 209], 00:19:15.972 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 236], 95.00th=[ 247], 00:19:15.972 | 99.00th=[ 259], 99.50th=[ 266], 99.90th=[ 279], 99.95th=[ 279], 00:19:15.972 | 99.99th=[ 279] 00:19:15.972 bw ( KiB/s): min=69632, max=119296, per=6.09%, avg=80385.25, stdev=10373.98, samples=20 00:19:15.972 iops : min= 272, max= 466, avg=313.90, stdev=40.52, samples=20 00:19:15.972 lat (msec) : 10=0.06%, 20=0.16%, 50=0.75%, 100=2.34%, 250=92.98% 00:19:15.972 lat (msec) : 500=3.72% 00:19:15.972 cpu : usr=0.83%, sys=0.89%, ctx=3074, majf=0, minf=1 00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,3203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.972 job3: (groupid=0, jobs=1): err= 0: pid=90937: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=456, BW=114MiB/s (120MB/s)(1151MiB/10092msec); 0 zone resets 
00:19:15.972 slat (usec): min=23, max=22274, avg=2167.58, stdev=3703.61 00:19:15.972 clat (msec): min=29, max=236, avg=138.03, stdev=11.47 00:19:15.972 lat (msec): min=29, max=236, avg=140.20, stdev=11.08 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 109], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 134], 00:19:15.972 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:19:15.972 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 144], 95.00th=[ 148], 00:19:15.972 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 226], 99.95th=[ 226], 00:19:15.972 | 99.99th=[ 236] 00:19:15.972 bw ( KiB/s): min=102605, max=121344, per=8.81%, avg=116261.45, stdev=4902.10, samples=20 00:19:15.972 iops : min= 400, max= 474, avg=454.10, stdev=19.26, samples=20 00:19:15.972 lat (msec) : 50=0.26%, 100=0.54%, 250=99.20% 00:19:15.972 cpu : usr=1.22%, sys=1.47%, ctx=4864, majf=0, minf=1 00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,4605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.972 job4: (groupid=0, jobs=1): err= 0: pid=90938: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=474, BW=119MiB/s (124MB/s)(1196MiB/10083msec); 0 zone resets 00:19:15.972 slat (usec): min=19, max=24525, avg=1998.62, stdev=3586.53 00:19:15.972 clat (msec): min=10, max=197, avg=132.82, stdev=19.74 00:19:15.972 lat (msec): min=10, max=197, avg=134.82, stdev=19.94 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 42], 5.00th=[ 90], 10.00th=[ 128], 20.00th=[ 131], 00:19:15.972 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 138], 00:19:15.972 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 144], 95.00th=[ 146], 00:19:15.972 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 178], 00:19:15.972 | 99.99th=[ 199] 00:19:15.972 bw ( KiB/s): min=114688, max=154112, per=9.15%, avg=120859.55, stdev=10546.97, samples=20 00:19:15.972 iops : min= 448, max= 602, avg=472.10, stdev=41.20, samples=20 00:19:15.972 lat (msec) : 20=0.08%, 50=1.50%, 100=4.18%, 250=94.23% 00:19:15.972 cpu : usr=1.01%, sys=1.51%, ctx=5454, majf=0, minf=1 00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,4785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.972 job5: (groupid=0, jobs=1): err= 0: pid=90939: Fri Oct 4 06:37:07 2024 00:19:15.972 write: IOPS=308, BW=77.1MiB/s (80.8MB/s)(781MiB/10131msec); 0 zone resets 00:19:15.972 slat (usec): min=24, max=69780, avg=3197.30, stdev=6044.58 00:19:15.972 clat (msec): min=3, max=309, avg=204.34, stdev=28.97 00:19:15.972 lat (msec): min=3, max=309, avg=207.53, stdev=28.79 00:19:15.972 clat percentiles (msec): 00:19:15.972 | 1.00th=[ 41], 5.00th=[ 174], 10.00th=[ 186], 20.00th=[ 194], 00:19:15.972 | 30.00th=[ 199], 40.00th=[ 203], 50.00th=[ 207], 60.00th=[ 209], 00:19:15.972 | 70.00th=[ 215], 80.00th=[ 222], 90.00th=[ 234], 95.00th=[ 241], 00:19:15.972 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 309], 00:19:15.972 | 99.99th=[ 309] 00:19:15.972 bw ( KiB/s): min=69493, 
max=95744, per=5.93%, avg=78321.85, stdev=6006.24, samples=20 00:19:15.972 iops : min= 271, max= 374, avg=305.90, stdev=23.52, samples=20 00:19:15.972 lat (msec) : 4=0.03%, 20=0.16%, 50=1.18%, 100=0.13%, 250=97.92% 00:19:15.972 lat (msec) : 500=0.58% 00:19:15.972 cpu : usr=0.94%, sys=0.96%, ctx=3312, majf=0, minf=1 00:19:15.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:15.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.972 issued rwts: total=0,3123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 job6: (groupid=0, jobs=1): err= 0: pid=90940: Fri Oct 4 06:37:07 2024 00:19:15.973 write: IOPS=329, BW=82.4MiB/s (86.4MB/s)(836MiB/10136msec); 0 zone resets 00:19:15.973 slat (usec): min=15, max=42619, avg=2815.86, stdev=5464.20 00:19:15.973 clat (usec): min=1494, max=295235, avg=191213.02, stdev=36177.49 00:19:15.973 lat (msec): min=2, max=295, avg=194.03, stdev=36.48 00:19:15.973 clat percentiles (msec): 00:19:15.973 | 1.00th=[ 26], 5.00th=[ 126], 10.00th=[ 165], 20.00th=[ 186], 00:19:15.973 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 201], 00:19:15.973 | 70.00th=[ 205], 80.00th=[ 211], 90.00th=[ 218], 95.00th=[ 226], 00:19:15.973 | 99.00th=[ 236], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 296], 00:19:15.973 | 99.99th=[ 296] 00:19:15.973 bw ( KiB/s): min=75624, max=121856, per=6.36%, avg=83918.60, stdev=11308.11, samples=20 00:19:15.973 iops : min= 295, max= 476, avg=327.75, stdev=44.20, samples=20 00:19:15.973 lat (msec) : 2=0.06%, 4=0.12%, 10=0.48%, 20=0.15%, 50=1.41% 00:19:15.973 lat (msec) : 100=2.00%, 250=95.03%, 500=0.75% 00:19:15.973 cpu : usr=0.85%, sys=1.07%, ctx=3028, majf=0, minf=1 00:19:15.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:15.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.973 issued rwts: total=0,3342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 job7: (groupid=0, jobs=1): err= 0: pid=90941: Fri Oct 4 06:37:07 2024 00:19:15.973 write: IOPS=529, BW=132MiB/s (139MB/s)(1348MiB/10173msec); 0 zone resets 00:19:15.973 slat (usec): min=18, max=64198, avg=1851.21, stdev=3310.00 00:19:15.973 clat (msec): min=70, max=371, avg=118.87, stdev=26.45 00:19:15.973 lat (msec): min=70, max=371, avg=120.72, stdev=26.55 00:19:15.973 clat percentiles (msec): 00:19:15.973 | 1.00th=[ 101], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 106], 00:19:15.973 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 110], 00:19:15.973 | 70.00th=[ 111], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 157], 00:19:15.973 | 99.00th=[ 207], 99.50th=[ 288], 99.90th=[ 363], 99.95th=[ 363], 00:19:15.973 | 99.99th=[ 372] 00:19:15.973 bw ( KiB/s): min=92672, max=153600, per=10.33%, avg=136353.15, stdev=20919.96, samples=20 00:19:15.973 iops : min= 362, max= 600, avg=532.50, stdev=81.71, samples=20 00:19:15.973 lat (msec) : 100=1.32%, 250=97.98%, 500=0.71% 00:19:15.973 cpu : usr=0.96%, sys=1.20%, ctx=7091, majf=0, minf=1 00:19:15.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:15.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.973 issued rwts: total=0,5390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 job8: (groupid=0, jobs=1): err= 0: pid=90942: Fri Oct 4 06:37:07 2024 00:19:15.973 write: IOPS=536, BW=134MiB/s (141MB/s)(1358MiB/10126msec); 0 zone resets 00:19:15.973 slat (usec): min=17, max=25916, avg=1817.51, stdev=3180.08 00:19:15.973 clat (msec): min=28, max=317, avg=117.42, stdev=23.01 00:19:15.973 lat (msec): min=28, max=317, avg=119.24, stdev=23.10 00:19:15.973 clat percentiles (msec): 00:19:15.973 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 105], 00:19:15.973 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 110], 00:19:15.973 | 70.00th=[ 111], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 155], 00:19:15.973 | 99.00th=[ 178], 99.50th=[ 236], 99.90th=[ 309], 99.95th=[ 309], 00:19:15.973 | 99.99th=[ 317] 00:19:15.973 bw ( KiB/s): min=104448, max=152064, per=10.41%, avg=137448.60, stdev=19138.27, samples=20 00:19:15.973 iops : min= 408, max= 594, avg=536.90, stdev=74.77, samples=20 00:19:15.973 lat (msec) : 50=0.33%, 100=1.69%, 250=97.50%, 500=0.48% 00:19:15.973 cpu : usr=0.95%, sys=1.60%, ctx=6940, majf=0, minf=1 00:19:15.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:15.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.973 issued rwts: total=0,5433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 job9: (groupid=0, jobs=1): err= 0: pid=90943: Fri Oct 4 06:37:07 2024 00:19:15.973 write: IOPS=320, BW=80.0MiB/s (83.9MB/s)(812MiB/10147msec); 0 zone resets 00:19:15.973 slat (usec): min=23, max=40441, avg=3033.59, stdev=5565.60 00:19:15.973 clat (msec): min=2, max=316, avg=196.75, stdev=24.26 00:19:15.973 lat (msec): min=2, max=316, avg=199.78, stdev=24.18 00:19:15.973 clat percentiles (msec): 00:19:15.973 | 1.00th=[ 83], 5.00th=[ 171], 10.00th=[ 182], 20.00th=[ 190], 00:19:15.973 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 203], 00:19:15.973 | 70.00th=[ 207], 80.00th=[ 211], 90.00th=[ 215], 95.00th=[ 218], 00:19:15.973 | 99.00th=[ 226], 99.50th=[ 268], 99.90th=[ 305], 99.95th=[ 317], 00:19:15.973 | 99.99th=[ 317] 00:19:15.973 bw ( KiB/s): min=75776, max=100864, per=6.18%, avg=81545.35, stdev=5067.70, samples=20 00:19:15.973 iops : min= 296, max= 394, avg=318.50, stdev=19.80, samples=20 00:19:15.973 lat (msec) : 4=0.12%, 50=0.37%, 100=1.26%, 250=97.57%, 500=0.68% 00:19:15.973 cpu : usr=0.99%, sys=1.02%, ctx=2641, majf=0, minf=1 00:19:15.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:15.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.973 issued rwts: total=0,3249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 job10: (groupid=0, jobs=1): err= 0: pid=90944: Fri Oct 4 06:37:07 2024 00:19:15.973 write: IOPS=335, BW=83.9MiB/s (88.0MB/s)(854MiB/10170msec); 0 zone resets 00:19:15.973 slat (usec): min=20, max=36796, avg=2877.41, stdev=5293.96 00:19:15.973 clat (msec): min=7, max=294, avg=187.64, stdev=35.42 00:19:15.973 lat (msec): min=8, max=294, avg=190.52, stdev=35.65 00:19:15.973 clat percentiles (msec): 00:19:15.973 | 
1.00th=[ 31], 5.00th=[ 103], 10.00th=[ 178], 20.00th=[ 186], 00:19:15.973 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 199], 00:19:15.973 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 209], 95.00th=[ 211], 00:19:15.973 | 99.00th=[ 251], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 296], 00:19:15.973 | 99.99th=[ 296] 00:19:15.973 bw ( KiB/s): min=78336, max=130048, per=6.50%, avg=85803.00, stdev=11626.71, samples=20 00:19:15.973 iops : min= 306, max= 508, avg=335.15, stdev=45.42, samples=20 00:19:15.973 lat (msec) : 10=0.12%, 20=0.50%, 50=1.26%, 100=3.07%, 250=93.94% 00:19:15.973 lat (msec) : 500=1.11% 00:19:15.973 cpu : usr=0.90%, sys=0.89%, ctx=2432, majf=0, minf=1 00:19:15.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:15.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:15.973 issued rwts: total=0,3415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:15.973 00:19:15.973 Run status group 0 (all jobs): 00:19:15.973 WRITE: bw=1289MiB/s (1352MB/s), 77.1MiB/s-281MiB/s (80.8MB/s-295MB/s), io=12.8GiB (13.8GB), run=10058-10173msec 00:19:15.973 00:19:15.973 Disk stats (read/write): 00:19:15.973 nvme0n1: ios=49/22485, merge=0/0, ticks=66/1215436, in_queue=1215502, util=97.73% 00:19:15.973 nvme10n1: ios=49/8997, merge=0/0, ticks=58/1210731, in_queue=1210789, util=97.78% 00:19:15.973 nvme1n1: ios=41/6261, merge=0/0, ticks=41/1210649, in_queue=1210690, util=97.92% 00:19:15.973 nvme2n1: ios=33/9056, merge=0/0, ticks=19/1210427, in_queue=1210446, util=98.04% 00:19:15.973 nvme3n1: ios=25/9371, merge=0/0, ticks=28/1214918, in_queue=1214946, util=98.02% 00:19:15.973 nvme4n1: ios=0/6097, merge=0/0, ticks=0/1205955, in_queue=1205955, util=98.16% 00:19:15.973 nvme5n1: ios=0/6533, merge=0/0, ticks=0/1210015, in_queue=1210015, util=98.37% 00:19:15.973 nvme6n1: ios=0/10637, merge=0/0, ticks=0/1206688, in_queue=1206688, util=98.39% 00:19:15.973 nvme7n1: ios=0/10704, merge=0/0, ticks=0/1208228, in_queue=1208228, util=98.76% 00:19:15.973 nvme8n1: ios=0/6359, merge=0/0, ticks=0/1209956, in_queue=1209956, util=98.95% 00:19:15.973 nvme9n1: ios=0/6676, merge=0/0, ticks=0/1210368, in_queue=1210368, util=98.80% 00:19:15.973 06:37:07 -- target/multiconnection.sh@36 -- # sync 00:19:15.973 06:37:07 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:15.973 06:37:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.973 06:37:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.973 06:37:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:15.973 06:37:07 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.973 06:37:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.973 06:37:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:15.973 06:37:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.973 06:37:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:15.973 06:37:07 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.973 06:37:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.973 06:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.973 06:37:07 -- common/autotest_common.sh@10 -- # set 
+x 00:19:15.973 06:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.973 06:37:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.973 06:37:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:15.973 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:15.973 06:37:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:15.973 06:37:07 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.973 06:37:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.973 06:37:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:15.973 06:37:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.973 06:37:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:15.973 06:37:07 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.973 06:37:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:15.973 06:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.973 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:19:15.973 06:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.973 06:37:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.973 06:37:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:15.973 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:15.973 06:37:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:15.974 06:37:07 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:15.974 06:37:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:15.974 06:37:07 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:15.974 06:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:15.974 06:37:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:15.974 06:37:07 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:15.974 06:37:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:15.974 06:37:07 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:15.974 06:37:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:07 
-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:15.974 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:15.974 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:15.974 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:15.974 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:15.974 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:15.974 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:15.974 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.974 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.974 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.974 06:37:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:15.974 06:37:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
00:19:16.234 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:16.234 06:37:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:16.234 06:37:08 -- common/autotest_common.sh@1198 -- # local i=0 00:19:16.234 06:37:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:16.234 06:37:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:16.234 06:37:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:16.234 06:37:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:16.234 06:37:08 -- common/autotest_common.sh@1210 -- # return 0 00:19:16.234 06:37:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:16.234 06:37:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.234 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:19:16.234 06:37:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.234 06:37:08 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:16.234 06:37:08 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:16.234 06:37:08 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:16.234 06:37:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.234 06:37:08 -- nvmf/common.sh@116 -- # sync 00:19:16.234 06:37:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.234 06:37:08 -- nvmf/common.sh@119 -- # set +e 00:19:16.234 06:37:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.234 06:37:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.234 rmmod nvme_tcp 00:19:16.234 rmmod nvme_fabrics 00:19:16.234 rmmod nvme_keyring 00:19:16.234 06:37:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.234 06:37:08 -- nvmf/common.sh@123 -- # set -e 00:19:16.234 06:37:08 -- nvmf/common.sh@124 -- # return 0 00:19:16.234 06:37:08 -- nvmf/common.sh@477 -- # '[' -n 90239 ']' 00:19:16.234 06:37:08 -- nvmf/common.sh@478 -- # killprocess 90239 00:19:16.234 06:37:08 -- common/autotest_common.sh@926 -- # '[' -z 90239 ']' 00:19:16.234 06:37:08 -- common/autotest_common.sh@930 -- # kill -0 90239 00:19:16.234 06:37:08 -- common/autotest_common.sh@931 -- # uname 00:19:16.234 06:37:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:16.234 06:37:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90239 00:19:16.234 killing process with pid 90239 00:19:16.234 06:37:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:16.234 06:37:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:16.234 06:37:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90239' 00:19:16.234 06:37:08 -- common/autotest_common.sh@945 -- # kill 90239 00:19:16.234 06:37:08 -- common/autotest_common.sh@950 -- # wait 90239 00:19:16.802 06:37:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.802 06:37:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:16.802 06:37:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:16.802 06:37:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.802 06:37:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:16.802 06:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.802 06:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.802 06:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.062 06:37:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
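The eleven disconnect/delete rounds traced above all come from the same loop in multiconnection.sh. Pieced back together from the xtrace (waitforserial_disconnect and rpc_cmd are the suite's own helper functions, visible in the trace), it is essentially:

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # drop the initiator side
        waitforserial_disconnect "SPDK${i}"                            # wait until the block device is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # tear down the target side
    done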
00:19:17.062 00:19:17.062 real 0m50.166s 00:19:17.062 user 2m47.556s 00:19:17.062 sys 0m24.915s 00:19:17.062 06:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.062 06:37:09 -- common/autotest_common.sh@10 -- # set +x 00:19:17.062 ************************************ 00:19:17.062 END TEST nvmf_multiconnection 00:19:17.062 ************************************ 00:19:17.062 06:37:09 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:17.062 06:37:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:17.062 06:37:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.062 06:37:09 -- common/autotest_common.sh@10 -- # set +x 00:19:17.062 ************************************ 00:19:17.062 START TEST nvmf_initiator_timeout 00:19:17.062 ************************************ 00:19:17.062 06:37:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:17.062 * Looking for test storage... 00:19:17.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:17.062 06:37:09 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.062 06:37:09 -- nvmf/common.sh@7 -- # uname -s 00:19:17.062 06:37:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.062 06:37:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.062 06:37:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.062 06:37:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.062 06:37:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.062 06:37:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.062 06:37:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.062 06:37:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.062 06:37:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.062 06:37:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:19:17.062 06:37:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:19:17.062 06:37:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.062 06:37:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.062 06:37:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.062 06:37:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.062 06:37:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.062 06:37:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.062 06:37:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.062 06:37:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.062 06:37:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.062 06:37:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.062 06:37:09 -- paths/export.sh@5 -- # export PATH 00:19:17.062 06:37:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.062 06:37:09 -- nvmf/common.sh@46 -- # : 0 00:19:17.062 06:37:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:17.062 06:37:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:17.062 06:37:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:17.062 06:37:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.062 06:37:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.062 06:37:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:17.062 06:37:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:17.062 06:37:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:17.062 06:37:09 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.062 06:37:09 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.062 06:37:09 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:17.062 06:37:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:17.062 06:37:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.062 06:37:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:17.062 06:37:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:17.062 06:37:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:17.062 06:37:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.062 06:37:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.062 06:37:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.062 06:37:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:19:17.062 06:37:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:17.062 06:37:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.062 06:37:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.062 06:37:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:17.062 06:37:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:17.062 06:37:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.062 06:37:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.062 06:37:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.062 06:37:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.062 06:37:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.063 06:37:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.063 06:37:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.063 06:37:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.063 06:37:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:17.063 06:37:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:17.063 Cannot find device "nvmf_tgt_br" 00:19:17.063 06:37:09 -- nvmf/common.sh@154 -- # true 00:19:17.063 06:37:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.063 Cannot find device "nvmf_tgt_br2" 00:19:17.063 06:37:09 -- nvmf/common.sh@155 -- # true 00:19:17.063 06:37:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:17.063 06:37:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:17.063 Cannot find device "nvmf_tgt_br" 00:19:17.063 06:37:09 -- nvmf/common.sh@157 -- # true 00:19:17.063 06:37:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:17.063 Cannot find device "nvmf_tgt_br2" 00:19:17.322 06:37:09 -- nvmf/common.sh@158 -- # true 00:19:17.322 06:37:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:17.322 06:37:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:17.322 06:37:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.322 06:37:09 -- nvmf/common.sh@161 -- # true 00:19:17.322 06:37:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.322 06:37:09 -- nvmf/common.sh@162 -- # true 00:19:17.322 06:37:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.322 06:37:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.322 06:37:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.322 06:37:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.322 06:37:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.322 06:37:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.322 06:37:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.322 06:37:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:17.322 06:37:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:19:17.322 06:37:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:17.322 06:37:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:17.322 06:37:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:17.322 06:37:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:17.322 06:37:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.322 06:37:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.322 06:37:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.322 06:37:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:17.322 06:37:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:17.322 06:37:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.322 06:37:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.322 06:37:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.322 06:37:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.322 06:37:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.322 06:37:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:17.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:17.322 00:19:17.322 --- 10.0.0.2 ping statistics --- 00:19:17.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.322 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:17.322 06:37:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:17.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:17.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:19:17.322 00:19:17.322 --- 10.0.0.3 ping statistics --- 00:19:17.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.322 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:17.322 06:37:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:17.322 00:19:17.322 --- 10.0.0.1 ping statistics --- 00:19:17.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.322 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:17.322 06:37:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.322 06:37:09 -- nvmf/common.sh@421 -- # return 0 00:19:17.322 06:37:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:17.322 06:37:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.322 06:37:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:17.322 06:37:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:17.322 06:37:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.322 06:37:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:17.322 06:37:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:17.581 06:37:10 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:17.581 06:37:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.581 06:37:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:17.581 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:17.581 06:37:10 -- nvmf/common.sh@469 -- # nvmfpid=91316 00:19:17.581 06:37:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:17.581 06:37:10 -- nvmf/common.sh@470 -- # waitforlisten 91316 00:19:17.581 06:37:10 -- common/autotest_common.sh@819 -- # '[' -z 91316 ']' 00:19:17.581 06:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.581 06:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.581 06:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.581 06:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.581 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:17.581 [2024-10-04 06:37:10.073646] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:19:17.581 [2024-10-04 06:37:10.073748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.581 [2024-10-04 06:37:10.213641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.839 [2024-10-04 06:37:10.293599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:17.839 [2024-10-04 06:37:10.293789] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.839 [2024-10-04 06:37:10.293804] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.839 [2024-10-04 06:37:10.293829] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
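Summarizing the veth plumbing that the three pings above just verified, the fixture built by nvmf_veth_init looks roughly like this (a reconstruction from the traced commands, not output taken from the log):

    # host side                          peer end (host, enslaved to bridge nvmf_br)
    # nvmf_init_if  10.0.0.1 (host)   <-veth->  nvmf_init_br
    # nvmf_tgt_if   10.0.0.2 (netns nvmf_tgt_ns_spdk)  <-veth->  nvmf_tgt_br
    # nvmf_tgt_if2  10.0.0.3 (netns nvmf_tgt_ns_spdk)  <-veth->  nvmf_tgt_br2
    # plus an iptables ACCEPT rule for tcp dport 4420 on nvmf_init_if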
00:19:17.839 [2024-10-04 06:37:10.293965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.839 [2024-10-04 06:37:10.294482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.839 [2024-10-04 06:37:10.295023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.839 [2024-10-04 06:37:10.295037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.406 06:37:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:18.406 06:37:10 -- common/autotest_common.sh@852 -- # return 0 00:19:18.406 06:37:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:18.406 06:37:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:18.406 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.406 06:37:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.406 06:37:11 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:18.406 06:37:11 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.406 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.406 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.406 Malloc0 00:19:18.406 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.406 06:37:11 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:18.406 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.406 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.406 Delay0 00:19:18.406 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.406 06:37:11 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.406 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.406 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.406 [2024-10-04 06:37:11.070448] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.406 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.406 06:37:11 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:18.406 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.406 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.665 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.665 06:37:11 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:18.665 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.665 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.665 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.665 06:37:11 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.665 06:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.665 06:37:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.665 [2024-10-04 06:37:11.098639] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.665 06:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.665 06:37:11 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:18.665 06:37:11 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:18.665 06:37:11 -- common/autotest_common.sh@1177 -- # local i=0 00:19:18.665 06:37:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.665 06:37:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:18.665 06:37:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:21.197 06:37:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:21.197 06:37:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:21.197 06:37:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.197 06:37:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:21.197 06:37:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.197 06:37:13 -- common/autotest_common.sh@1187 -- # return 0 00:19:21.197 06:37:13 -- target/initiator_timeout.sh@35 -- # fio_pid=91398 00:19:21.197 06:37:13 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:21.197 06:37:13 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:21.197 [global] 00:19:21.197 thread=1 00:19:21.197 invalidate=1 00:19:21.197 rw=write 00:19:21.197 time_based=1 00:19:21.197 runtime=60 00:19:21.197 ioengine=libaio 00:19:21.197 direct=1 00:19:21.197 bs=4096 00:19:21.197 iodepth=1 00:19:21.197 norandommap=0 00:19:21.197 numjobs=1 00:19:21.197 00:19:21.197 verify_dump=1 00:19:21.197 verify_backlog=512 00:19:21.197 verify_state_save=0 00:19:21.197 do_verify=1 00:19:21.197 verify=crc32c-intel 00:19:21.197 [job0] 00:19:21.197 filename=/dev/nvme0n1 00:19:21.197 Could not set queue depth (nvme0n1) 00:19:21.197 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.197 fio-3.35 00:19:21.197 Starting 1 thread 00:19:23.755 06:37:16 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:23.755 06:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.755 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.755 true 00:19:23.755 06:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.755 06:37:16 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:23.755 06:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.755 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.755 true 00:19:23.755 06:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.755 06:37:16 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:23.755 06:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.755 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.755 true 00:19:23.755 06:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.755 06:37:16 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:23.755 06:37:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.755 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.755 true 00:19:23.755 06:37:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.755 06:37:16 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:27.041 06:37:19 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:27.041 06:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.042 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:19:27.042 true 00:19:27.042 06:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.042 06:37:19 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:27.042 06:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.042 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:19:27.042 true 00:19:27.042 06:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.042 06:37:19 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:27.042 06:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.042 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:19:27.042 true 00:19:27.042 06:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.042 06:37:19 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:27.042 06:37:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.042 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:19:27.042 true 00:19:27.042 06:37:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.042 06:37:19 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:27.042 06:37:19 -- target/initiator_timeout.sh@54 -- # wait 91398 00:20:23.266 00:20:23.266 job0: (groupid=0, jobs=1): err= 0: pid=91419: Fri Oct 4 06:38:13 2024 00:20:23.266 read: IOPS=784, BW=3138KiB/s (3213kB/s)(184MiB/60000msec) 00:20:23.266 slat (usec): min=11, max=137, avg=16.18, stdev= 5.42 00:20:23.266 clat (usec): min=93, max=897, avg=205.14, stdev=24.17 00:20:23.266 lat (usec): min=173, max=928, avg=221.32, stdev=25.00 00:20:23.266 clat percentiles (usec): 00:20:23.266 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:20:23.266 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:20:23.266 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:20:23.266 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 388], 00:20:23.266 | 99.99th=[ 668] 00:20:23.266 write: IOPS=785, BW=3140KiB/s (3216kB/s)(184MiB/60000msec); 0 zone resets 00:20:23.266 slat (usec): min=18, max=9246, avg=25.30, stdev=56.17 00:20:23.266 clat (usec): min=101, max=40639k, avg=1023.58, stdev=187243.76 00:20:23.266 lat (usec): min=143, max=40639k, avg=1048.88, stdev=187243.75 00:20:23.266 clat percentiles (usec): 00:20:23.266 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:20:23.266 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:20:23.266 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 194], 00:20:23.266 | 99.00th=[ 219], 99.50th=[ 235], 99.90th=[ 285], 99.95th=[ 355], 00:20:23.266 | 99.99th=[ 725] 00:20:23.266 bw ( KiB/s): min= 2376, max=12144, per=100.00%, avg=9450.77, stdev=1752.26, samples=39 00:20:23.266 iops : min= 594, max= 3036, avg=2362.69, stdev=438.07, samples=39 00:20:23.266 lat (usec) : 100=0.01%, 250=97.94%, 500=2.02%, 750=0.02%, 1000=0.01% 00:20:23.266 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:23.266 cpu : usr=0.58%, sys=2.38%, ctx=94232, majf=0, minf=5 00:20:23.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:23.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:23.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.266 issued rwts: total=47072,47104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:23.266 00:20:23.266 Run status group 0 (all jobs): 00:20:23.266 READ: bw=3138KiB/s (3213kB/s), 3138KiB/s-3138KiB/s (3213kB/s-3213kB/s), io=184MiB (193MB), run=60000-60000msec 00:20:23.266 WRITE: bw=3140KiB/s (3216kB/s), 3140KiB/s-3140KiB/s (3216kB/s-3216kB/s), io=184MiB (193MB), run=60000-60000msec 00:20:23.266 00:20:23.266 Disk stats (read/write): 00:20:23.266 nvme0n1: ios=46919/47104, merge=0/0, ticks=10016/8167, in_queue=18183, util=99.93% 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:23.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:23.266 06:38:13 -- common/autotest_common.sh@1198 -- # local i=0 00:20:23.266 06:38:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:23.266 06:38:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:23.266 06:38:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:23.266 06:38:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:23.266 06:38:13 -- common/autotest_common.sh@1210 -- # return 0 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:23.266 nvmf hotplug test: fio successful as expected 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.266 06:38:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.266 06:38:13 -- common/autotest_common.sh@10 -- # set +x 00:20:23.266 06:38:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:23.266 06:38:13 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:23.266 06:38:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:23.266 06:38:13 -- nvmf/common.sh@116 -- # sync 00:20:23.266 06:38:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:23.266 06:38:13 -- nvmf/common.sh@119 -- # set +e 00:20:23.266 06:38:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:23.266 06:38:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:23.266 rmmod nvme_tcp 00:20:23.266 rmmod nvme_fabrics 00:20:23.266 rmmod nvme_keyring 00:20:23.266 06:38:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:23.266 06:38:13 -- nvmf/common.sh@123 -- # set -e 00:20:23.266 06:38:13 -- nvmf/common.sh@124 -- # return 0 00:20:23.266 06:38:13 -- nvmf/common.sh@477 -- # '[' -n 91316 ']' 00:20:23.266 06:38:13 -- nvmf/common.sh@478 -- # killprocess 91316 00:20:23.266 06:38:13 -- common/autotest_common.sh@926 -- # '[' -z 91316 ']' 00:20:23.266 06:38:13 -- common/autotest_common.sh@930 -- # kill -0 91316 00:20:23.266 06:38:13 -- common/autotest_common.sh@931 -- # uname 00:20:23.266 06:38:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:23.266 06:38:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91316 00:20:23.266 06:38:13 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:23.266 06:38:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:23.266 killing process with pid 91316 00:20:23.266 06:38:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91316' 00:20:23.266 06:38:13 -- common/autotest_common.sh@945 -- # kill 91316 00:20:23.266 06:38:13 -- common/autotest_common.sh@950 -- # wait 91316 00:20:23.266 06:38:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.266 06:38:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.266 06:38:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:23.266 06:38:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.266 06:38:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.266 06:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.266 06:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.266 06:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.266 06:38:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:23.266 00:20:23.266 real 1m4.675s 00:20:23.266 user 4m7.589s 00:20:23.266 sys 0m7.932s 00:20:23.266 06:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.266 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.266 ************************************ 00:20:23.266 END TEST nvmf_initiator_timeout 00:20:23.266 ************************************ 00:20:23.266 06:38:14 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:23.266 06:38:14 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:23.266 06:38:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:23.266 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.266 06:38:14 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:23.266 06:38:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:23.266 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.266 06:38:14 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:23.266 06:38:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:23.266 06:38:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:23.266 06:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:23.266 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.266 ************************************ 00:20:23.266 START TEST nvmf_multicontroller 00:20:23.266 ************************************ 00:20:23.266 06:38:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:23.266 * Looking for test storage... 
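The shutdown path that each of these tests runs through nvmftestfini condenses to a few commands (process handling simplified; $nvmfpid is the target PID captured at startup):

# Detach the kernel initiator and unload the NVMe/TCP modules (the rmmod
# lines above are the verbose output of these two modprobe calls).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target, drop its namespace, and clear the initiator address.
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete nvmf_tgt_ns_spdk   # what remove_spdk_ns boils down to (assumed)
ip -4 addr flush nvmf_init_if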
00:20:23.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.266 06:38:14 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.266 06:38:14 -- nvmf/common.sh@7 -- # uname -s 00:20:23.266 06:38:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.266 06:38:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.266 06:38:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.266 06:38:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.266 06:38:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.266 06:38:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.266 06:38:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.266 06:38:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.266 06:38:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.266 06:38:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.266 06:38:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:23.266 06:38:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:23.266 06:38:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.266 06:38:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.266 06:38:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.266 06:38:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.266 06:38:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.266 06:38:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.266 06:38:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.266 06:38:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.267 06:38:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.267 06:38:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.267 06:38:14 -- 
paths/export.sh@5 -- # export PATH 00:20:23.267 06:38:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.267 06:38:14 -- nvmf/common.sh@46 -- # : 0 00:20:23.267 06:38:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.267 06:38:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.267 06:38:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.267 06:38:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.267 06:38:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.267 06:38:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.267 06:38:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.267 06:38:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:23.267 06:38:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:23.267 06:38:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.267 06:38:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:23.267 06:38:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:23.267 06:38:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.267 06:38:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.267 06:38:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.267 06:38:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.267 06:38:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.267 06:38:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.267 06:38:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.267 06:38:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:23.267 06:38:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.267 06:38:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.267 06:38:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.267 06:38:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:23.267 06:38:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.267 06:38:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.267 06:38:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.267 06:38:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.267 06:38:14 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.267 06:38:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.267 06:38:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.267 06:38:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.267 06:38:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:23.267 06:38:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:23.267 Cannot find device "nvmf_tgt_br" 00:20:23.267 06:38:14 -- nvmf/common.sh@154 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.267 Cannot find device "nvmf_tgt_br2" 00:20:23.267 06:38:14 -- nvmf/common.sh@155 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:23.267 06:38:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:23.267 Cannot find device "nvmf_tgt_br" 00:20:23.267 06:38:14 -- nvmf/common.sh@157 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:23.267 Cannot find device "nvmf_tgt_br2" 00:20:23.267 06:38:14 -- nvmf/common.sh@158 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:23.267 06:38:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:23.267 06:38:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.267 06:38:14 -- nvmf/common.sh@161 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.267 06:38:14 -- nvmf/common.sh@162 -- # true 00:20:23.267 06:38:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.267 06:38:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.267 06:38:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.267 06:38:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.267 06:38:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.267 06:38:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.267 06:38:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.267 06:38:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.267 06:38:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.267 06:38:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:23.267 06:38:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:23.267 06:38:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:23.267 06:38:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:23.267 06:38:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.267 06:38:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.267 06:38:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.267 06:38:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:23.267 06:38:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:23.267 06:38:14 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.267 06:38:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.267 06:38:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.267 06:38:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.267 06:38:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.267 06:38:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:23.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:23.267 00:20:23.267 --- 10.0.0.2 ping statistics --- 00:20:23.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.267 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:23.267 06:38:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:23.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:23.267 00:20:23.267 --- 10.0.0.3 ping statistics --- 00:20:23.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.267 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:23.267 06:38:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:20:23.267 00:20:23.267 --- 10.0.0.1 ping statistics --- 00:20:23.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.267 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:23.267 06:38:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.267 06:38:14 -- nvmf/common.sh@421 -- # return 0 00:20:23.267 06:38:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.267 06:38:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:23.267 06:38:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.267 06:38:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:23.267 06:38:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:23.267 06:38:14 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:23.267 06:38:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:23.267 06:38:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:23.267 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.267 06:38:14 -- nvmf/common.sh@469 -- # nvmfpid=92247 00:20:23.267 06:38:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:23.267 06:38:14 -- nvmf/common.sh@470 -- # waitforlisten 92247 00:20:23.267 06:38:14 -- common/autotest_common.sh@819 -- # '[' -z 92247 ']' 00:20:23.267 06:38:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.267 06:38:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:23.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.267 06:38:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
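The "Cannot find device" and "Cannot open network namespace" messages above are expected: before rebuilding the topology, nvmf_veth_init tears down whatever the previous test left behind and simply ignores failures for objects that are already gone. A sketch of that teardown, with || true standing in for the helper's tolerance of these errors:

ip link set nvmf_init_br nomaster || true
ip link set nvmf_tgt_br nomaster || true
ip link set nvmf_tgt_br2 nomaster || true
ip link set nvmf_init_br down || true
ip link set nvmf_tgt_br down || true
ip link set nvmf_tgt_br2 down || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true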
00:20:23.267 06:38:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:23.267 06:38:14 -- common/autotest_common.sh@10 -- # set +x 00:20:23.267 [2024-10-04 06:38:14.833832] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:20:23.267 [2024-10-04 06:38:14.834445] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.267 [2024-10-04 06:38:14.973228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:23.267 [2024-10-04 06:38:15.052895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.267 [2024-10-04 06:38:15.053074] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.267 [2024-10-04 06:38:15.053088] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.268 [2024-10-04 06:38:15.053096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.268 [2024-10-04 06:38:15.053366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.268 [2024-10-04 06:38:15.053746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.268 [2024-10-04 06:38:15.053780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.268 06:38:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.268 06:38:15 -- common/autotest_common.sh@852 -- # return 0 00:20:23.268 06:38:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.268 06:38:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:23.268 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.268 06:38:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.268 06:38:15 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.268 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.268 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.268 [2024-10-04 06:38:15.934004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.268 06:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.268 06:38:15 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.268 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.268 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 Malloc0 00:20:23.527 06:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:15 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.527 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:15 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:23.527 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:15 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.527 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 [2024-10-04 06:38:15.994584] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.527 06:38:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:15 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:23.527 06:38:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 [2024-10-04 06:38:16.002470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:23.527 06:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 Malloc1 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:23.527 06:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:23.527 06:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:23.527 06:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:23.527 06:38:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:23.527 06:38:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.527 06:38:16 -- host/multicontroller.sh@44 -- # bdevperf_pid=92305 00:20:23.527 06:38:16 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:23.527 06:38:16 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.527 06:38:16 -- host/multicontroller.sh@47 -- # waitforlisten 92305 /var/tmp/bdevperf.sock 00:20:23.527 06:38:16 -- common/autotest_common.sh@819 -- # '[' -z 92305 ']' 00:20:23.527 06:38:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.527 06:38:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:23.527 06:38:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.527 06:38:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:23.527 06:38:16 -- common/autotest_common.sh@10 -- # set +x 00:20:24.461 06:38:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:24.461 06:38:17 -- common/autotest_common.sh@852 -- # return 0 00:20:24.461 06:38:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:24.461 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.461 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 NVMe0n1 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.720 06:38:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.720 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 06:38:17 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.720 1 00:20:24.720 06:38:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:24.720 06:38:17 -- common/autotest_common.sh@640 -- # local es=0 00:20:24.720 06:38:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:24.720 06:38:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:24.720 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 2024/10/04 06:38:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:24.720 request: 00:20:24.720 { 00:20:24.720 "method": "bdev_nvme_attach_controller", 00:20:24.720 "params": { 00:20:24.720 "name": "NVMe0", 00:20:24.720 "trtype": "tcp", 00:20:24.720 "traddr": "10.0.0.2", 00:20:24.720 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:24.720 "hostaddr": "10.0.0.2", 00:20:24.720 "hostsvcid": "60000", 00:20:24.720 "adrfam": "ipv4", 00:20:24.720 "trsvcid": "4420", 00:20:24.720 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:24.720 } 00:20:24.720 } 00:20:24.720 Got JSON-RPC error response 
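The rejections that follow are the point of the test: with NVMe0 attached, reusing the controller name with a mismatched host NQN, a different subsystem NQN, or multipath disabled must fail. The same flow as plain rpc.py calls against bdevperf's socket (names and flags mirror the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Initial attach succeeds and exposes bdev NVMe0n1.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Same name, different host NQN: rejected with "A controller named NVMe0
# already exists with the specified network path".
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
    -q nqn.2021-09-7.io.spdk:00001 || echo "rejected as expected"

# Same name, different subsystem NQN: rejected for the same reason.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
    || echo "rejected as expected"

# Same path but with multipath explicitly disabled: also rejected.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
    -x disable || echo "rejected as expected"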
00:20:24.720 GoRPCClient: error on JSON-RPC call 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # es=1 00:20:24.720 06:38:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:24.720 06:38:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:24.720 06:38:17 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:24.720 06:38:17 -- common/autotest_common.sh@640 -- # local es=0 00:20:24.720 06:38:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:24.720 06:38:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:24.720 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 2024/10/04 06:38:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:24.720 request: 00:20:24.720 { 00:20:24.720 "method": "bdev_nvme_attach_controller", 00:20:24.720 "params": { 00:20:24.720 "name": "NVMe0", 00:20:24.720 "trtype": "tcp", 00:20:24.720 "traddr": "10.0.0.2", 00:20:24.720 "hostaddr": "10.0.0.2", 00:20:24.720 "hostsvcid": "60000", 00:20:24.720 "adrfam": "ipv4", 00:20:24.720 "trsvcid": "4420", 00:20:24.720 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:24.720 } 00:20:24.720 } 00:20:24.720 Got JSON-RPC error response 00:20:24.720 GoRPCClient: error on JSON-RPC call 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # es=1 00:20:24.720 06:38:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:24.720 06:38:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:24.720 06:38:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@640 -- # local es=0 00:20:24.720 06:38:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:24.720 06:38:17 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 2024/10/04 06:38:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:24.720 request: 00:20:24.720 { 00:20:24.720 "method": "bdev_nvme_attach_controller", 00:20:24.720 "params": { 00:20:24.720 "name": "NVMe0", 00:20:24.720 "trtype": "tcp", 00:20:24.720 "traddr": "10.0.0.2", 00:20:24.720 "hostaddr": "10.0.0.2", 00:20:24.720 "hostsvcid": "60000", 00:20:24.720 "adrfam": "ipv4", 00:20:24.720 "trsvcid": "4420", 00:20:24.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.720 "multipath": "disable" 00:20:24.720 } 00:20:24.720 } 00:20:24.720 Got JSON-RPC error response 00:20:24.720 GoRPCClient: error on JSON-RPC call 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # es=1 00:20:24.720 06:38:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:24.720 06:38:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:24.720 06:38:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:24.720 06:38:17 -- common/autotest_common.sh@640 -- # local es=0 00:20:24.720 06:38:17 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:24.720 06:38:17 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:20:24.720 06:38:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:24.720 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.720 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.720 2024/10/04 06:38:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:20:24.720 request: 00:20:24.720 { 00:20:24.720 "method": "bdev_nvme_attach_controller", 00:20:24.720 "params": { 00:20:24.720 "name": "NVMe0", 00:20:24.720 "trtype": "tcp", 00:20:24.720 "traddr": "10.0.0.2", 00:20:24.720 "hostaddr": "10.0.0.2", 00:20:24.720 "hostsvcid": "60000", 00:20:24.720 "adrfam": "ipv4", 00:20:24.720 "trsvcid": "4420", 00:20:24.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.720 "multipath": "failover" 00:20:24.720 } 00:20:24.720 } 00:20:24.720 Got JSON-RPC error response 00:20:24.720 GoRPCClient: error on JSON-RPC call 00:20:24.720 06:38:17 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:24.720 06:38:17 -- common/autotest_common.sh@643 -- # es=1 00:20:24.720 06:38:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:24.721 06:38:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:24.721 06:38:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:24.721 06:38:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:24.721 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.721 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.721 00:20:24.721 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.721 06:38:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:24.721 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.721 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.721 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.721 06:38:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:24.721 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.721 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.721 00:20:24.721 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.721 06:38:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:24.721 06:38:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:24.721 06:38:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:24.721 06:38:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.721 06:38:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:24.721 06:38:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:24.721 06:38:17 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.096 0 00:20:26.096 06:38:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:26.096 06:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.096 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:20:26.096 06:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.096 06:38:18 -- host/multicontroller.sh@100 -- # killprocess 92305 00:20:26.096 06:38:18 -- common/autotest_common.sh@926 -- # '[' -z 92305 ']' 00:20:26.096 06:38:18 -- common/autotest_common.sh@930 -- # kill -0 92305 00:20:26.096 06:38:18 -- common/autotest_common.sh@931 -- # uname 00:20:26.096 06:38:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:20:26.096 06:38:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92305 00:20:26.096 06:38:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:26.096 06:38:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:26.096 killing process with pid 92305 00:20:26.096 06:38:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92305' 00:20:26.096 06:38:18 -- common/autotest_common.sh@945 -- # kill 92305 00:20:26.096 06:38:18 -- common/autotest_common.sh@950 -- # wait 92305 00:20:26.354 06:38:18 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.354 06:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.354 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:20:26.354 06:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.354 06:38:18 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:26.354 06:38:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:26.354 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:20:26.354 06:38:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:26.354 06:38:18 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:26.354 06:38:18 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.354 06:38:18 -- common/autotest_common.sh@1597 -- # read -r file 00:20:26.354 06:38:18 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:26.354 06:38:18 -- common/autotest_common.sh@1596 -- # sort -u 00:20:26.354 06:38:18 -- common/autotest_common.sh@1598 -- # cat 00:20:26.354 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:26.354 [2024-10-04 06:38:16.130061] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:20:26.354 [2024-10-04 06:38:16.130194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92305 ] 00:20:26.354 [2024-10-04 06:38:16.270647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.354 [2024-10-04 06:38:16.360451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.354 [2024-10-04 06:38:17.370487] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 088ca8d4-70b1-442f-aac9-befb0e4434ee already exists 00:20:26.354 [2024-10-04 06:38:17.370545] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:088ca8d4-70b1-442f-aac9-befb0e4434ee alias for bdev NVMe1n1 00:20:26.354 [2024-10-04 06:38:17.370581] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:26.354 Running I/O for 1 seconds... 
00:20:26.354 00:20:26.354 Latency(us) 00:20:26.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.354 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:26.354 NVMe0n1 : 1.00 21911.40 85.59 0.00 0.00 5829.54 2874.65 10843.23 00:20:26.354 =================================================================================================================== 00:20:26.354 Total : 21911.40 85.59 0.00 0.00 5829.54 2874.65 10843.23 00:20:26.354 Received shutdown signal, test time was about 1.000000 seconds 00:20:26.354 00:20:26.354 Latency(us) 00:20:26.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.354 =================================================================================================================== 00:20:26.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.354 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:26.354 06:38:18 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.354 06:38:18 -- common/autotest_common.sh@1597 -- # read -r file 00:20:26.354 06:38:18 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:26.354 06:38:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:26.354 06:38:18 -- nvmf/common.sh@116 -- # sync 00:20:26.354 06:38:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:26.354 06:38:18 -- nvmf/common.sh@119 -- # set +e 00:20:26.354 06:38:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:26.354 06:38:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:26.354 rmmod nvme_tcp 00:20:26.354 rmmod nvme_fabrics 00:20:26.354 rmmod nvme_keyring 00:20:26.354 06:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:26.354 06:38:18 -- nvmf/common.sh@123 -- # set -e 00:20:26.354 06:38:18 -- nvmf/common.sh@124 -- # return 0 00:20:26.354 06:38:18 -- nvmf/common.sh@477 -- # '[' -n 92247 ']' 00:20:26.354 06:38:18 -- nvmf/common.sh@478 -- # killprocess 92247 00:20:26.354 06:38:18 -- common/autotest_common.sh@926 -- # '[' -z 92247 ']' 00:20:26.354 06:38:18 -- common/autotest_common.sh@930 -- # kill -0 92247 00:20:26.354 06:38:18 -- common/autotest_common.sh@931 -- # uname 00:20:26.354 06:38:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.354 06:38:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92247 00:20:26.354 06:38:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:26.354 06:38:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:26.354 killing process with pid 92247 00:20:26.354 06:38:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92247' 00:20:26.354 06:38:19 -- common/autotest_common.sh@945 -- # kill 92247 00:20:26.354 06:38:19 -- common/autotest_common.sh@950 -- # wait 92247 00:20:26.613 06:38:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.613 06:38:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.613 06:38:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.613 06:38:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.613 06:38:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.613 06:38:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.613 06:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.613 06:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.873 06:38:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:26.873 
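Putting the bdevperf side together: the harness starts bdevperf idle, wires paths and controllers up over its RPC socket, and only then triggers I/O. A condensed sketch follows (flags mirror the trace; -z makes bdevperf wait for the perform_tests RPC instead of running immediately). The bdev_register errors in try.txt above are the expected consequence of the NVMe1 attach: it reaches the same namespace, whose bdev UUID is already registered, so only NVMe0n1 carries I/O.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
bdevperf_pid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Add a second path (port 4421) to the existing NVMe0 controller, drop it
# again, then attach an independent controller NVMe1 to the same subsystem.
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s $sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc -s $sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2

# Kick off the configured write workload and collect the results.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"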
00:20:26.873 real 0m4.989s 00:20:26.873 user 0m15.774s 00:20:26.873 sys 0m1.063s 00:20:26.873 06:38:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.873 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:20:26.873 ************************************ 00:20:26.873 END TEST nvmf_multicontroller 00:20:26.873 ************************************ 00:20:26.873 06:38:19 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:26.873 06:38:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:26.873 06:38:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:26.873 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:20:26.873 ************************************ 00:20:26.873 START TEST nvmf_aer 00:20:26.873 ************************************ 00:20:26.873 06:38:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:26.873 * Looking for test storage... 00:20:26.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.873 06:38:19 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.873 06:38:19 -- nvmf/common.sh@7 -- # uname -s 00:20:26.873 06:38:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.873 06:38:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.873 06:38:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.873 06:38:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.873 06:38:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.873 06:38:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.873 06:38:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.873 06:38:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.873 06:38:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.873 06:38:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:26.873 06:38:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:26.873 06:38:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.873 06:38:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.873 06:38:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.873 06:38:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.873 06:38:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.873 06:38:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.873 06:38:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 trace elided: four near-identical assignments that repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, followed by export PATH and an echo of the result]
00:20:26.873 06:38:19 -- nvmf/common.sh@46 -- # : 0 00:20:26.873 06:38:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:26.873 06:38:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:26.873 06:38:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:26.873 06:38:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.873 06:38:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.873 06:38:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:26.873 06:38:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:26.873 06:38:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:26.873 06:38:19 -- host/aer.sh@11 -- # nvmftestinit 00:20:26.873 06:38:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:26.873 06:38:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.873 06:38:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:26.873 06:38:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:26.873 06:38:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:26.873 06:38:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.873 06:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.873 06:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.873 06:38:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:26.873 06:38:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:26.873 06:38:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.873 06:38:19 --
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.873 06:38:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.873 06:38:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:26.873 06:38:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.873 06:38:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.873 06:38:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.873 06:38:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.873 06:38:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.873 06:38:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.873 06:38:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.873 06:38:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.873 06:38:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:26.873 06:38:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:26.873 Cannot find device "nvmf_tgt_br" 00:20:26.873 06:38:19 -- nvmf/common.sh@154 -- # true 00:20:26.873 06:38:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.873 Cannot find device "nvmf_tgt_br2" 00:20:26.873 06:38:19 -- nvmf/common.sh@155 -- # true 00:20:26.873 06:38:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:26.873 06:38:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:26.873 Cannot find device "nvmf_tgt_br" 00:20:26.873 06:38:19 -- nvmf/common.sh@157 -- # true 00:20:26.873 06:38:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:26.873 Cannot find device "nvmf_tgt_br2" 00:20:26.873 06:38:19 -- nvmf/common.sh@158 -- # true 00:20:26.873 06:38:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:27.132 06:38:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:27.132 06:38:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.132 06:38:19 -- nvmf/common.sh@161 -- # true 00:20:27.132 06:38:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.132 06:38:19 -- nvmf/common.sh@162 -- # true 00:20:27.132 06:38:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.132 06:38:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.132 06:38:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.132 06:38:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.132 06:38:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.132 06:38:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.132 06:38:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.132 06:38:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:27.132 06:38:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:27.132 06:38:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:27.132 06:38:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:27.132 06:38:19 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:27.132 06:38:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:27.132 06:38:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.132 06:38:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:27.132 06:38:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.132 06:38:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:27.132 06:38:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:27.132 06:38:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:27.132 06:38:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:27.132 06:38:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:27.132 06:38:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:27.132 06:38:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:27.132 06:38:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:27.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:20:27.132 00:20:27.132 --- 10.0.0.2 ping statistics --- 00:20:27.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.132 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:27.132 06:38:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:27.132 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:27.132 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:27.132 00:20:27.132 --- 10.0.0.3 ping statistics --- 00:20:27.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.132 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:27.132 06:38:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:27.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:27.132 00:20:27.132 --- 10.0.0.1 ping statistics --- 00:20:27.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.132 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:27.132 06:38:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.132 06:38:19 -- nvmf/common.sh@421 -- # return 0 00:20:27.132 06:38:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:27.132 06:38:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.132 06:38:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:27.132 06:38:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:27.132 06:38:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.132 06:38:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:27.132 06:38:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:27.391 06:38:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:27.391 06:38:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:27.391 06:38:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:27.391 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:20:27.391 06:38:19 -- nvmf/common.sh@469 -- # nvmfpid=92548 00:20:27.391 06:38:19 -- nvmf/common.sh@470 -- # waitforlisten 92548 00:20:27.391 06:38:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:27.391 06:38:19 -- common/autotest_common.sh@819 -- # '[' -z 92548 ']' 00:20:27.391 06:38:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.391 06:38:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.391 06:38:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.391 06:38:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.391 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:20:27.391 [2024-10-04 06:38:19.886205] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:20:27.391 [2024-10-04 06:38:19.886299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.391 [2024-10-04 06:38:20.027435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.649 [2024-10-04 06:38:20.104015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.649 [2024-10-04 06:38:20.104158] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.649 [2024-10-04 06:38:20.104170] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.649 [2024-10-04 06:38:20.104179] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
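For orientation before the target starts answering RPCs: the nvmf_veth_init trace above boils down to the following topology, condensed here into the ip(8)/iptables(8) commands taken from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is wired the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-to-target reachability, as checked above

The sub-millisecond RTTs in the ping output are what you would expect from an all-virtual veth-and-bridge path on one host.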
00:20:27.649 [2024-10-04 06:38:20.104335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.649 [2024-10-04 06:38:20.104715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.649 [2024-10-04 06:38:20.105055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.649 [2024-10-04 06:38:20.105060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.215 06:38:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:28.215 06:38:20 -- common/autotest_common.sh@852 -- # return 0 00:20:28.215 06:38:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:28.215 06:38:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:28.215 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 06:38:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.473 06:38:20 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.473 06:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.473 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 [2024-10-04 06:38:20.931976] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.473 06:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.473 06:38:20 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:28.473 06:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.473 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 Malloc0 00:20:28.473 06:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.473 06:38:20 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:28.473 06:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.473 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 06:38:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.473 06:38:20 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.473 06:38:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.473 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.473 06:38:21 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.473 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.473 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.473 [2024-10-04 06:38:21.010515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.473 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.473 06:38:21 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:28.474 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.474 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.474 [2024-10-04 06:38:21.018189] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:28.474 [ 00:20:28.474 { 00:20:28.474 "allow_any_host": true, 00:20:28.474 "hosts": [], 00:20:28.474 "listen_addresses": [], 00:20:28.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.474 "subtype": "Discovery" 00:20:28.474 }, 00:20:28.474 { 00:20:28.474 "allow_any_host": true, 00:20:28.474 "hosts": 
[], 00:20:28.474 "listen_addresses": [ 00:20:28.474 { 00:20:28.474 "adrfam": "IPv4", 00:20:28.474 "traddr": "10.0.0.2", 00:20:28.474 "transport": "TCP", 00:20:28.474 "trsvcid": "4420", 00:20:28.474 "trtype": "TCP" 00:20:28.474 } 00:20:28.474 ], 00:20:28.474 "max_cntlid": 65519, 00:20:28.474 "max_namespaces": 2, 00:20:28.474 "min_cntlid": 1, 00:20:28.474 "model_number": "SPDK bdev Controller", 00:20:28.474 "namespaces": [ 00:20:28.474 { 00:20:28.474 "bdev_name": "Malloc0", 00:20:28.474 "name": "Malloc0", 00:20:28.474 "nguid": "86E243BA87BA4B30979046FBD385C194", 00:20:28.474 "nsid": 1, 00:20:28.474 "uuid": "86e243ba-87ba-4b30-9790-46fbd385c194" 00:20:28.474 } 00:20:28.474 ], 00:20:28.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.474 "serial_number": "SPDK00000000000001", 00:20:28.474 "subtype": "NVMe" 00:20:28.474 } 00:20:28.474 ] 00:20:28.474 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.474 06:38:21 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:28.474 06:38:21 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:28.474 06:38:21 -- host/aer.sh@33 -- # aerpid=92608 00:20:28.474 06:38:21 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:28.474 06:38:21 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:28.474 06:38:21 -- common/autotest_common.sh@1244 -- # local i=0 00:20:28.474 06:38:21 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.474 06:38:21 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:20:28.474 06:38:21 -- common/autotest_common.sh@1247 -- # i=1 00:20:28.474 06:38:21 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:28.474 06:38:21 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.474 06:38:21 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:20:28.474 06:38:21 -- common/autotest_common.sh@1247 -- # i=2 00:20:28.474 06:38:21 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:28.732 06:38:21 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.732 06:38:21 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.732 06:38:21 -- common/autotest_common.sh@1255 -- # return 0 00:20:28.732 06:38:21 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:28.732 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.732 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.732 Malloc1 00:20:28.732 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.732 06:38:21 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:28.732 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.732 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.732 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.732 06:38:21 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:28.732 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.732 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.732 Asynchronous Event Request test 00:20:28.732 Attaching to 10.0.0.2 00:20:28.732 Attached to 10.0.0.2 00:20:28.732 Registering asynchronous event callbacks... 00:20:28.732 Starting namespace attribute notice tests for all controllers... 
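For readers skimming the trace: rpc_cmd in these scripts is a thin wrapper which, in this harness, drives SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the setup leading to the aer_cb lines just below condenses to roughly the following sketch (method names and arguments as traced above; rpc.py stands in for rpc_cmd):

    # Target-side setup for the AER test:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # With the aer tool attached (started above with -n 2 and the touch file),
    # hot-adding Malloc1 as nsid 2 is what raises the namespace-changed event:
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2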
00:20:28.732 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:28.732 aer_cb - Changed Namespace 00:20:28.732 Cleaning up... 00:20:28.732 [ 00:20:28.732 { 00:20:28.732 "allow_any_host": true, 00:20:28.732 "hosts": [], 00:20:28.732 "listen_addresses": [], 00:20:28.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.732 "subtype": "Discovery" 00:20:28.732 }, 00:20:28.732 { 00:20:28.732 "allow_any_host": true, 00:20:28.732 "hosts": [], 00:20:28.732 "listen_addresses": [ 00:20:28.732 { 00:20:28.732 "adrfam": "IPv4", 00:20:28.732 "traddr": "10.0.0.2", 00:20:28.732 "transport": "TCP", 00:20:28.732 "trsvcid": "4420", 00:20:28.732 "trtype": "TCP" 00:20:28.732 } 00:20:28.732 ], 00:20:28.732 "max_cntlid": 65519, 00:20:28.732 "max_namespaces": 2, 00:20:28.732 "min_cntlid": 1, 00:20:28.732 "model_number": "SPDK bdev Controller", 00:20:28.732 "namespaces": [ 00:20:28.732 { 00:20:28.732 "bdev_name": "Malloc0", 00:20:28.732 "name": "Malloc0", 00:20:28.732 "nguid": "86E243BA87BA4B30979046FBD385C194", 00:20:28.732 "nsid": 1, 00:20:28.732 "uuid": "86e243ba-87ba-4b30-9790-46fbd385c194" 00:20:28.732 }, 00:20:28.732 { 00:20:28.732 "bdev_name": "Malloc1", 00:20:28.732 "name": "Malloc1", 00:20:28.732 "nguid": "1091235C04364649B86B9500977F020F", 00:20:28.732 "nsid": 2, 00:20:28.732 "uuid": "1091235c-0436-4649-b86b-9500977f020f" 00:20:28.732 } 00:20:28.732 ], 00:20:28.732 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.732 "serial_number": "SPDK00000000000001", 00:20:28.732 "subtype": "NVMe" 00:20:28.732 } 00:20:28.732 ] 00:20:28.732 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.732 06:38:21 -- host/aer.sh@43 -- # wait 92608 00:20:28.732 06:38:21 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:28.732 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.732 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.732 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.732 06:38:21 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:28.732 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.732 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.990 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.990 06:38:21 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.990 06:38:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:28.990 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.990 06:38:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:28.990 06:38:21 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:28.990 06:38:21 -- host/aer.sh@51 -- # nvmftestfini 00:20:28.990 06:38:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:28.990 06:38:21 -- nvmf/common.sh@116 -- # sync 00:20:28.990 06:38:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:28.990 06:38:21 -- nvmf/common.sh@119 -- # set +e 00:20:28.990 06:38:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:28.990 06:38:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:28.990 rmmod nvme_tcp 00:20:28.990 rmmod nvme_fabrics 00:20:28.990 rmmod nvme_keyring 00:20:28.990 06:38:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:28.990 06:38:21 -- nvmf/common.sh@123 -- # set -e 00:20:28.990 06:38:21 -- nvmf/common.sh@124 -- # return 0 00:20:28.990 06:38:21 -- nvmf/common.sh@477 -- # '[' -n 92548 ']' 00:20:28.990 06:38:21 -- nvmf/common.sh@478 -- # killprocess 92548 00:20:28.990 06:38:21 -- 
common/autotest_common.sh@926 -- # '[' -z 92548 ']' 00:20:28.990 06:38:21 -- common/autotest_common.sh@930 -- # kill -0 92548 00:20:28.990 06:38:21 -- common/autotest_common.sh@931 -- # uname 00:20:28.990 06:38:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:28.990 06:38:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92548 00:20:28.990 06:38:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:28.990 06:38:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:28.990 killing process with pid 92548 00:20:28.990 06:38:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92548' 00:20:28.990 06:38:21 -- common/autotest_common.sh@945 -- # kill 92548 00:20:28.990 [2024-10-04 06:38:21.607263] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:28.990 06:38:21 -- common/autotest_common.sh@950 -- # wait 92548 00:20:29.249 06:38:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.249 06:38:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.249 06:38:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:29.249 06:38:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.249 06:38:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.249 06:38:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.249 06:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.249 06:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.508 06:38:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:29.508 00:20:29.508 real 0m2.576s 00:20:29.508 user 0m7.255s 00:20:29.508 sys 0m0.705s 00:20:29.508 06:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.508 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:29.508 ************************************ 00:20:29.508 END TEST nvmf_aer 00:20:29.508 ************************************ 00:20:29.508 06:38:21 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:29.508 06:38:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:29.508 06:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:29.508 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:20:29.508 ************************************ 00:20:29.508 START TEST nvmf_async_init 00:20:29.508 ************************************ 00:20:29.508 06:38:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:29.508 * Looking for test storage... 
00:20:29.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.508 06:38:22 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
[nvmf/common.sh@7-@21 variable setup, scripts/common.sh sourcing and the paths/export.sh@2-@6 PATH re-exports elided: identical, apart from timestamps, to the nvmf_aer run above]
00:20:29.508 06:38:22 -- nvmf/common.sh@46 -- # : 0 00:20:29.508 06:38:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.508 06:38:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.508 06:38:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.508 06:38:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.508 06:38:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.508 06:38:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:29.508 06:38:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.508 06:38:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.508 06:38:22 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:29.508 06:38:22 -- host/async_init.sh@14 -- # null_block_size=512 00:20:29.508 06:38:22 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:29.508 06:38:22 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:29.508 06:38:22 -- host/async_init.sh@20 -- # uuidgen 00:20:29.508 06:38:22 -- host/async_init.sh@20 -- # tr -d - 00:20:29.508 06:38:22 -- host/async_init.sh@20 -- # nguid=18933cf61f4c42ad80a688d610a8ce4f 00:20:29.508 06:38:22 -- host/async_init.sh@22 -- # nvmftestinit
[nvmftestinit trace elided: the '[' -z tcp ']' check through prepare_net_devs and nvmf_veth_init, identical, apart from timestamps, to the nvmf_aer run above]
00:20:29.508 06:38:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.508 06:38:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.508 06:38:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.508 06:38:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:29.508 06:38:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.508 06:38:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.508 06:38:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.508 06:38:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.508 06:38:22 -- nvmf/common.sh@148 -- #
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.508 06:38:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.508 06:38:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.508 06:38:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.508 06:38:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:29.508 06:38:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:29.508 Cannot find device "nvmf_tgt_br" 00:20:29.508 06:38:22 -- nvmf/common.sh@154 -- # true 00:20:29.509 06:38:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.509 Cannot find device "nvmf_tgt_br2" 00:20:29.509 06:38:22 -- nvmf/common.sh@155 -- # true 00:20:29.509 06:38:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:29.509 06:38:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:29.509 Cannot find device "nvmf_tgt_br" 00:20:29.509 06:38:22 -- nvmf/common.sh@157 -- # true 00:20:29.509 06:38:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:29.509 Cannot find device "nvmf_tgt_br2" 00:20:29.509 06:38:22 -- nvmf/common.sh@158 -- # true 00:20:29.509 06:38:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:29.768 06:38:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:29.768 06:38:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.768 06:38:22 -- nvmf/common.sh@161 -- # true 00:20:29.768 06:38:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.768 06:38:22 -- nvmf/common.sh@162 -- # true 00:20:29.768 06:38:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.768 06:38:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.768 06:38:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.768 06:38:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.768 06:38:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.768 06:38:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.768 06:38:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.768 06:38:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.768 06:38:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.768 06:38:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:29.768 06:38:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:29.768 06:38:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:29.768 06:38:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:29.768 06:38:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.768 06:38:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.768 06:38:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.768 06:38:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:29.768 06:38:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:29.768 06:38:22 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.768 06:38:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.768 06:38:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.768 06:38:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.768 06:38:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.768 06:38:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:29.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:20:29.768 00:20:29.768 --- 10.0.0.2 ping statistics --- 00:20:29.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.768 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:29.768 06:38:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:29.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:29.768 00:20:29.768 --- 10.0.0.3 ping statistics --- 00:20:29.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.768 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:29.768 06:38:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:29.768 00:20:29.768 --- 10.0.0.1 ping statistics --- 00:20:29.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.768 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:29.768 06:38:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.768 06:38:22 -- nvmf/common.sh@421 -- # return 0 00:20:29.768 06:38:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:29.768 06:38:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.768 06:38:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:29.768 06:38:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:29.768 06:38:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.768 06:38:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:29.768 06:38:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:30.027 06:38:22 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:30.027 06:38:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:30.027 06:38:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:30.027 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.027 06:38:22 -- nvmf/common.sh@469 -- # nvmfpid=92776 00:20:30.027 06:38:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:30.027 06:38:22 -- nvmf/common.sh@470 -- # waitforlisten 92776 00:20:30.027 06:38:22 -- common/autotest_common.sh@819 -- # '[' -z 92776 ']' 00:20:30.027 06:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.027 06:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:30.027 06:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
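The launch-and-wait pattern here is the same as in the aer run: nvmfappstart runs the target inside the namespace (so 10.0.0.2 is one of its local addresses) and waitforlisten blocks until the RPC socket answers. A minimal stand-in, with the polling loop simplified from what autotest_common.sh actually does:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll until the app serves RPCs on /var/tmp/spdk.sock (max_retries=100 above):
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Here -i sets the shared-memory id, -e the tracepoint group mask, and -m the reactor core mask, which is why this run (0x1) starts a single reactor on core 0 where the aer run (0xF) started four.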
00:20:30.027 06:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:30.027 06:38:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.027 [2024-10-04 06:38:22.512915] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:20:30.027 [2024-10-04 06:38:22.512974] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.027 [2024-10-04 06:38:22.643160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.285 [2024-10-04 06:38:22.722522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.285 [2024-10-04 06:38:22.723031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.285 [2024-10-04 06:38:22.723186] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.285 [2024-10-04 06:38:22.723307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.285 [2024-10-04 06:38:22.723431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.863 06:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:30.863 06:38:23 -- common/autotest_common.sh@852 -- # return 0 00:20:30.863 06:38:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:30.863 06:38:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:30.863 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:30.863 06:38:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.863 06:38:23 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:30.863 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.863 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 [2024-10-04 06:38:23.542873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 null0 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 18933cf61f4c42ad80a688d610a8ce4f 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.135 [2024-10-04 06:38:23.582998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.135 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.135 06:38:23 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:31.135 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.135 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.394 nvme0n1 00:20:31.394 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.394 06:38:23 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.394 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.394 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.394 [ 00:20:31.394 { 00:20:31.394 "aliases": [ 00:20:31.394 "18933cf6-1f4c-42ad-80a6-88d610a8ce4f" 00:20:31.394 ], 00:20:31.394 "assigned_rate_limits": { 00:20:31.394 "r_mbytes_per_sec": 0, 00:20:31.394 "rw_ios_per_sec": 0, 00:20:31.394 "rw_mbytes_per_sec": 0, 00:20:31.394 "w_mbytes_per_sec": 0 00:20:31.394 }, 00:20:31.394 "block_size": 512, 00:20:31.394 "claimed": false, 00:20:31.394 "driver_specific": { 00:20:31.394 "mp_policy": "active_passive", 00:20:31.394 "nvme": [ 00:20:31.394 { 00:20:31.394 "ctrlr_data": { 00:20:31.394 "ana_reporting": false, 00:20:31.394 "cntlid": 1, 00:20:31.394 "firmware_revision": "24.01.1", 00:20:31.394 "model_number": "SPDK bdev Controller", 00:20:31.394 "multi_ctrlr": true, 00:20:31.394 "oacs": { 00:20:31.394 "firmware": 0, 00:20:31.394 "format": 0, 00:20:31.394 "ns_manage": 0, 00:20:31.394 "security": 0 00:20:31.394 }, 00:20:31.394 "serial_number": "00000000000000000000", 00:20:31.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.394 "vendor_id": "0x8086" 00:20:31.394 }, 00:20:31.394 "ns_data": { 00:20:31.394 "can_share": true, 00:20:31.394 "id": 1 00:20:31.394 }, 00:20:31.394 "trid": { 00:20:31.394 "adrfam": "IPv4", 00:20:31.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.394 "traddr": "10.0.0.2", 00:20:31.394 "trsvcid": "4420", 00:20:31.394 "trtype": "TCP" 00:20:31.394 }, 00:20:31.394 "vs": { 00:20:31.394 "nvme_version": "1.3" 00:20:31.394 } 00:20:31.394 } 00:20:31.394 ] 00:20:31.394 }, 00:20:31.394 "name": "nvme0n1", 00:20:31.394 "num_blocks": 2097152, 00:20:31.394 "product_name": "NVMe disk", 00:20:31.394 "supported_io_types": { 00:20:31.394 "abort": true, 00:20:31.394 "compare": true, 00:20:31.394 "compare_and_write": true, 00:20:31.394 "flush": true, 00:20:31.394 "nvme_admin": true, 00:20:31.394 "nvme_io": true, 00:20:31.394 "read": true, 00:20:31.394 "reset": true, 00:20:31.394 "unmap": false, 00:20:31.394 "write": true, 00:20:31.394 "write_zeroes": true 00:20:31.394 }, 00:20:31.394 "uuid": "18933cf6-1f4c-42ad-80a6-88d610a8ce4f", 00:20:31.394 "zoned": false 00:20:31.394 } 00:20:31.394 ] 00:20:31.394 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.394 06:38:23 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:31.394 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.394 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 [2024-10-04 06:38:23.838919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:31.395 [2024-10-04 06:38:23.839037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188a1c0 (9): Bad file descriptor 00:20:31.395 [2024-10-04 06:38:23.970941] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:31.395 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:23 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.395 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 [ 00:20:31.395 { 00:20:31.395 "aliases": [ 00:20:31.395 "18933cf6-1f4c-42ad-80a6-88d610a8ce4f" 00:20:31.395 ], 00:20:31.395 "assigned_rate_limits": { 00:20:31.395 "r_mbytes_per_sec": 0, 00:20:31.395 "rw_ios_per_sec": 0, 00:20:31.395 "rw_mbytes_per_sec": 0, 00:20:31.395 "w_mbytes_per_sec": 0 00:20:31.395 }, 00:20:31.395 "block_size": 512, 00:20:31.395 "claimed": false, 00:20:31.395 "driver_specific": { 00:20:31.395 "mp_policy": "active_passive", 00:20:31.395 "nvme": [ 00:20:31.395 { 00:20:31.395 "ctrlr_data": { 00:20:31.395 "ana_reporting": false, 00:20:31.395 "cntlid": 2, 00:20:31.395 "firmware_revision": "24.01.1", 00:20:31.395 "model_number": "SPDK bdev Controller", 00:20:31.395 "multi_ctrlr": true, 00:20:31.395 "oacs": { 00:20:31.395 "firmware": 0, 00:20:31.395 "format": 0, 00:20:31.395 "ns_manage": 0, 00:20:31.395 "security": 0 00:20:31.395 }, 00:20:31.395 "serial_number": "00000000000000000000", 00:20:31.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.395 "vendor_id": "0x8086" 00:20:31.395 }, 00:20:31.395 "ns_data": { 00:20:31.395 "can_share": true, 00:20:31.395 "id": 1 00:20:31.395 }, 00:20:31.395 "trid": { 00:20:31.395 "adrfam": "IPv4", 00:20:31.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.395 "traddr": "10.0.0.2", 00:20:31.395 "trsvcid": "4420", 00:20:31.395 "trtype": "TCP" 00:20:31.395 }, 00:20:31.395 "vs": { 00:20:31.395 "nvme_version": "1.3" 00:20:31.395 } 00:20:31.395 } 00:20:31.395 ] 00:20:31.395 }, 00:20:31.395 "name": "nvme0n1", 00:20:31.395 "num_blocks": 2097152, 00:20:31.395 "product_name": "NVMe disk", 00:20:31.395 "supported_io_types": { 00:20:31.395 "abort": true, 00:20:31.395 "compare": true, 00:20:31.395 "compare_and_write": true, 00:20:31.395 "flush": true, 00:20:31.395 "nvme_admin": true, 00:20:31.395 "nvme_io": true, 00:20:31.395 "read": true, 00:20:31.395 "reset": true, 00:20:31.395 "unmap": false, 00:20:31.395 "write": true, 00:20:31.395 "write_zeroes": true 00:20:31.395 }, 00:20:31.395 "uuid": "18933cf6-1f4c-42ad-80a6-88d610a8ce4f", 00:20:31.395 "zoned": false 00:20:31.395 } 00:20:31.395 ] 00:20:31.395 06:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:23 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.395 06:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:24 -- host/async_init.sh@53 -- # mktemp 00:20:31.395 06:38:24 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.labsDchz8u 00:20:31.395 06:38:24 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:31.395 06:38:24 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.labsDchz8u 00:20:31.395 06:38:24 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:20:31.395 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:24 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:31.395 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 [2024-10-04 06:38:24.031128] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.395 [2024-10-04 06:38:24.031255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:31.395 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:24 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.labsDchz8u 00:20:31.395 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.395 06:38:24 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.labsDchz8u 00:20:31.395 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.395 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.395 [2024-10-04 06:38:24.047119] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.668 nvme0n1 00:20:31.668 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.668 06:38:24 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:31.668 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.668 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.668 [ 00:20:31.668 { 00:20:31.668 "aliases": [ 00:20:31.668 "18933cf6-1f4c-42ad-80a6-88d610a8ce4f" 00:20:31.668 ], 00:20:31.669 "assigned_rate_limits": { 00:20:31.669 "r_mbytes_per_sec": 0, 00:20:31.669 "rw_ios_per_sec": 0, 00:20:31.669 "rw_mbytes_per_sec": 0, 00:20:31.669 "w_mbytes_per_sec": 0 00:20:31.669 }, 00:20:31.669 "block_size": 512, 00:20:31.669 "claimed": false, 00:20:31.669 "driver_specific": { 00:20:31.669 "mp_policy": "active_passive", 00:20:31.669 "nvme": [ 00:20:31.669 { 00:20:31.669 "ctrlr_data": { 00:20:31.669 "ana_reporting": false, 00:20:31.669 "cntlid": 3, 00:20:31.669 "firmware_revision": "24.01.1", 00:20:31.669 "model_number": "SPDK bdev Controller", 00:20:31.669 "multi_ctrlr": true, 00:20:31.669 "oacs": { 00:20:31.669 "firmware": 0, 00:20:31.669 "format": 0, 00:20:31.669 "ns_manage": 0, 00:20:31.669 "security": 0 00:20:31.669 }, 00:20:31.669 "serial_number": "00000000000000000000", 00:20:31.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.669 "vendor_id": "0x8086" 00:20:31.669 }, 00:20:31.669 "ns_data": { 00:20:31.669 "can_share": true, 00:20:31.669 "id": 1 00:20:31.669 }, 00:20:31.669 "trid": { 00:20:31.669 "adrfam": "IPv4", 00:20:31.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:31.669 "traddr": "10.0.0.2", 00:20:31.669 "trsvcid": "4421", 00:20:31.669 "trtype": "TCP" 00:20:31.669 }, 00:20:31.669 "vs": { 00:20:31.669 "nvme_version": "1.3" 00:20:31.669 } 00:20:31.669 } 00:20:31.669 ] 00:20:31.669 }, 00:20:31.669 
"name": "nvme0n1", 00:20:31.669 "num_blocks": 2097152, 00:20:31.669 "product_name": "NVMe disk", 00:20:31.669 "supported_io_types": { 00:20:31.669 "abort": true, 00:20:31.669 "compare": true, 00:20:31.669 "compare_and_write": true, 00:20:31.669 "flush": true, 00:20:31.669 "nvme_admin": true, 00:20:31.669 "nvme_io": true, 00:20:31.669 "read": true, 00:20:31.669 "reset": true, 00:20:31.669 "unmap": false, 00:20:31.669 "write": true, 00:20:31.669 "write_zeroes": true 00:20:31.669 }, 00:20:31.669 "uuid": "18933cf6-1f4c-42ad-80a6-88d610a8ce4f", 00:20:31.669 "zoned": false 00:20:31.669 } 00:20:31.669 ] 00:20:31.669 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.669 06:38:24 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.669 06:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.669 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.669 06:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.669 06:38:24 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.labsDchz8u 00:20:31.669 06:38:24 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:31.669 06:38:24 -- host/async_init.sh@78 -- # nvmftestfini 00:20:31.669 06:38:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:31.669 06:38:24 -- nvmf/common.sh@116 -- # sync 00:20:31.669 06:38:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:31.669 06:38:24 -- nvmf/common.sh@119 -- # set +e 00:20:31.669 06:38:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:31.669 06:38:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:31.669 rmmod nvme_tcp 00:20:31.669 rmmod nvme_fabrics 00:20:31.669 rmmod nvme_keyring 00:20:31.669 06:38:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:31.669 06:38:24 -- nvmf/common.sh@123 -- # set -e 00:20:31.669 06:38:24 -- nvmf/common.sh@124 -- # return 0 00:20:31.669 06:38:24 -- nvmf/common.sh@477 -- # '[' -n 92776 ']' 00:20:31.669 06:38:24 -- nvmf/common.sh@478 -- # killprocess 92776 00:20:31.669 06:38:24 -- common/autotest_common.sh@926 -- # '[' -z 92776 ']' 00:20:31.669 06:38:24 -- common/autotest_common.sh@930 -- # kill -0 92776 00:20:31.669 06:38:24 -- common/autotest_common.sh@931 -- # uname 00:20:31.669 06:38:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:31.669 06:38:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92776 00:20:31.669 06:38:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:31.669 06:38:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:31.669 06:38:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92776' 00:20:31.669 killing process with pid 92776 00:20:31.669 06:38:24 -- common/autotest_common.sh@945 -- # kill 92776 00:20:31.669 06:38:24 -- common/autotest_common.sh@950 -- # wait 92776 00:20:31.928 06:38:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:31.928 06:38:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:31.928 06:38:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:31.928 06:38:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.928 06:38:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:31.928 06:38:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.928 06:38:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.928 06:38:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.928 06:38:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:31.928 
00:20:31.928 real 0m2.578s 00:20:31.928 user 0m2.335s 00:20:31.928 sys 0m0.674s 00:20:31.928 06:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.928 ************************************ 00:20:31.928 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:31.928 END TEST nvmf_async_init 00:20:31.928 ************************************ 00:20:32.187 06:38:24 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:32.187 06:38:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:32.187 06:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:32.187 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:32.187 ************************************ 00:20:32.187 START TEST dma 00:20:32.187 ************************************ 00:20:32.187 06:38:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:32.187 * Looking for test storage... 00:20:32.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:32.187 06:38:24 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.187 06:38:24 -- nvmf/common.sh@7 -- # uname -s 00:20:32.187 06:38:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.187 06:38:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.187 06:38:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.187 06:38:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.187 06:38:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.187 06:38:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.187 06:38:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.187 06:38:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.187 06:38:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.187 06:38:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.187 06:38:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:32.187 06:38:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:32.187 06:38:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.187 06:38:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.187 06:38:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.187 06:38:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.187 06:38:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.187 06:38:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.187 06:38:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.187 06:38:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.187 06:38:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.187 06:38:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.187 06:38:24 -- paths/export.sh@5 -- # export PATH 00:20:32.187 06:38:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.187 06:38:24 -- nvmf/common.sh@46 -- # : 0 00:20:32.187 06:38:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.187 06:38:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.187 06:38:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.187 06:38:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.187 06:38:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.187 06:38:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:32.187 06:38:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.187 06:38:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.187 06:38:24 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:32.187 06:38:24 -- host/dma.sh@13 -- # exit 0 00:20:32.187 00:20:32.187 real 0m0.100s 00:20:32.187 user 0m0.045s 00:20:32.188 sys 0m0.060s 00:20:32.188 06:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.188 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:32.188 ************************************ 00:20:32.188 END TEST dma 00:20:32.188 ************************************ 00:20:32.188 06:38:24 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:32.188 06:38:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:32.188 06:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:32.188 06:38:24 -- common/autotest_common.sh@10 -- # set +x 00:20:32.188 ************************************ 00:20:32.188 START TEST nvmf_identify 00:20:32.188 ************************************ 00:20:32.188 06:38:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:32.188 * Looking for test storage... 
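One thing worth flagging before nvmf_identify gets going: the dma suite that just reported PASS is effectively a no-op on this transport. host/dma.sh targets RDMA only, so on a TCP run it sources the environment and exits 0 before registering any work, which is exactly what the two-line xtrace above shows ('[' tcp '!=' rdma ']' followed by exit 0). The guard presumably looks like the sketch below; the variable name is a guess from the harness convention, since the trace only shows it already expanded to tcp:

    # host/dma.sh, sketched: bail out unless the run is RDMA
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi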
00:20:32.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:32.188 06:38:24 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.188 06:38:24 -- nvmf/common.sh@7 -- # uname -s 00:20:32.188 06:38:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.188 06:38:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.188 06:38:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.188 06:38:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.188 06:38:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.188 06:38:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.188 06:38:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.188 06:38:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.188 06:38:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.447 06:38:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.447 06:38:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:32.447 06:38:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:32.447 06:38:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.447 06:38:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.447 06:38:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.447 06:38:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.447 06:38:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.447 06:38:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.447 06:38:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.447 06:38:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.447 06:38:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.447 06:38:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.447 06:38:24 -- paths/export.sh@5 
-- # export PATH 00:20:32.447 06:38:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.447 06:38:24 -- nvmf/common.sh@46 -- # : 0 00:20:32.447 06:38:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.447 06:38:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.447 06:38:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.447 06:38:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.447 06:38:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.447 06:38:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:32.447 06:38:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.447 06:38:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.448 06:38:24 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.448 06:38:24 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.448 06:38:24 -- host/identify.sh@14 -- # nvmftestinit 00:20:32.448 06:38:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:32.448 06:38:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.448 06:38:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:32.448 06:38:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:32.448 06:38:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:32.448 06:38:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.448 06:38:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.448 06:38:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.448 06:38:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:32.448 06:38:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:32.448 06:38:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:32.448 06:38:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:32.448 06:38:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:32.448 06:38:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:32.448 06:38:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.448 06:38:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.448 06:38:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:32.448 06:38:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:32.448 06:38:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.448 06:38:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.448 06:38:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.448 06:38:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.448 06:38:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.448 06:38:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.448 06:38:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.448 06:38:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.448 06:38:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:32.448 06:38:24 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:32.448 Cannot find device "nvmf_tgt_br" 00:20:32.448 06:38:24 -- nvmf/common.sh@154 -- # true 00:20:32.448 06:38:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.448 Cannot find device "nvmf_tgt_br2" 00:20:32.448 06:38:24 -- nvmf/common.sh@155 -- # true 00:20:32.448 06:38:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:32.448 06:38:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:32.448 Cannot find device "nvmf_tgt_br" 00:20:32.448 06:38:24 -- nvmf/common.sh@157 -- # true 00:20:32.448 06:38:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:32.448 Cannot find device "nvmf_tgt_br2" 00:20:32.448 06:38:24 -- nvmf/common.sh@158 -- # true 00:20:32.448 06:38:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:32.448 06:38:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:32.448 06:38:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.448 06:38:25 -- nvmf/common.sh@161 -- # true 00:20:32.448 06:38:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:32.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.448 06:38:25 -- nvmf/common.sh@162 -- # true 00:20:32.448 06:38:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:32.448 06:38:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:32.448 06:38:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:32.448 06:38:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:32.448 06:38:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:32.448 06:38:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:32.448 06:38:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:32.448 06:38:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:32.448 06:38:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:32.448 06:38:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:32.448 06:38:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:32.448 06:38:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:32.448 06:38:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:32.448 06:38:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:32.448 06:38:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:32.707 06:38:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:32.707 06:38:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:32.707 06:38:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:32.707 06:38:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:32.707 06:38:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:32.707 06:38:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:32.707 06:38:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:32.707 06:38:25 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:32.707 06:38:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:32.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:32.707 00:20:32.707 --- 10.0.0.2 ping statistics --- 00:20:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.707 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:32.707 06:38:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:32.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:32.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:32.707 00:20:32.707 --- 10.0.0.3 ping statistics --- 00:20:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.707 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:32.707 06:38:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:32.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:32.707 00:20:32.707 --- 10.0.0.1 ping statistics --- 00:20:32.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.707 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:32.707 06:38:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.707 06:38:25 -- nvmf/common.sh@421 -- # return 0 00:20:32.707 06:38:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:32.707 06:38:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.707 06:38:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:32.707 06:38:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:32.707 06:38:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.707 06:38:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:32.707 06:38:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:32.707 06:38:25 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:32.707 06:38:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:32.707 06:38:25 -- common/autotest_common.sh@10 -- # set +x 00:20:32.707 06:38:25 -- host/identify.sh@19 -- # nvmfpid=93044 00:20:32.707 06:38:25 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.707 06:38:25 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.707 06:38:25 -- host/identify.sh@23 -- # waitforlisten 93044 00:20:32.707 06:38:25 -- common/autotest_common.sh@819 -- # '[' -z 93044 ']' 00:20:32.707 06:38:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.707 06:38:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.707 06:38:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.707 06:38:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.707 06:38:25 -- common/autotest_common.sh@10 -- # set +x 00:20:32.707 [2024-10-04 06:38:25.299162] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
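While nvmf_tgt comes up inside the namespace, note what it is binding to: nvmftestinit assembled the whole test network with plain iproute2 just above, one veth pair per endpoint glued together by a bridge, with the target side pushed into the nvmf_tgt_ns_spdk namespace. A condensed sketch of that wiring, run as root (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way and omitted here):

    # One veth pair for the initiator, one for the target
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Addresses: 10.0.0.1 initiator, 10.0.0.2 target (inside the namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # Bridge the two pair-ends together and bring everything up
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic through, then sanity-check with the same pings
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2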
00:20:32.707 [2024-10-04 06:38:25.299246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.967 [2024-10-04 06:38:25.440894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.967 [2024-10-04 06:38:25.522397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.967 [2024-10-04 06:38:25.522604] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.967 [2024-10-04 06:38:25.522622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.967 [2024-10-04 06:38:25.522634] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.967 [2024-10-04 06:38:25.522808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.967 [2024-10-04 06:38:25.522925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.967 [2024-10-04 06:38:25.523378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.967 [2024-10-04 06:38:25.523422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.903 06:38:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.903 06:38:26 -- common/autotest_common.sh@852 -- # return 0 00:20:33.903 06:38:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.903 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.903 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 [2024-10-04 06:38:26.288362] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.903 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.903 06:38:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:33.903 06:38:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:33.903 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 06:38:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.903 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.903 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 Malloc0 00:20:33.903 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.904 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.904 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:33.904 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.904 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.904 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.904 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 [2024-10-04 06:38:26.406018] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.904 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.904 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.904 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:33.904 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:33.904 06:38:26 -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 [2024-10-04 06:38:26.421743] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:33.904 [ 00:20:33.904 { 00:20:33.904 "allow_any_host": true, 00:20:33.904 "hosts": [], 00:20:33.904 "listen_addresses": [ 00:20:33.904 { 00:20:33.904 "adrfam": "IPv4", 00:20:33.904 "traddr": "10.0.0.2", 00:20:33.904 "transport": "TCP", 00:20:33.904 "trsvcid": "4420", 00:20:33.904 "trtype": "TCP" 00:20:33.904 } 00:20:33.904 ], 00:20:33.904 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:33.904 "subtype": "Discovery" 00:20:33.904 }, 00:20:33.904 { 00:20:33.904 "allow_any_host": true, 00:20:33.904 "hosts": [], 00:20:33.904 "listen_addresses": [ 00:20:33.904 { 00:20:33.904 "adrfam": "IPv4", 00:20:33.904 "traddr": "10.0.0.2", 00:20:33.904 "transport": "TCP", 00:20:33.904 "trsvcid": "4420", 00:20:33.904 "trtype": "TCP" 00:20:33.904 } 00:20:33.904 ], 00:20:33.904 "max_cntlid": 65519, 00:20:33.904 "max_namespaces": 32, 00:20:33.904 "min_cntlid": 1, 00:20:33.904 "model_number": "SPDK bdev Controller", 00:20:33.904 "namespaces": [ 00:20:33.904 { 00:20:33.904 "bdev_name": "Malloc0", 00:20:33.904 "eui64": "ABCDEF0123456789", 00:20:33.904 "name": "Malloc0", 00:20:33.904 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:33.904 "nsid": 1, 00:20:33.904 "uuid": "dfbd2966-6dfa-414b-a38b-b79780e3ff8e" 00:20:33.904 } 00:20:33.904 ], 00:20:33.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.904 "serial_number": "SPDK00000000000001", 00:20:33.904 "subtype": "NVMe" 00:20:33.904 } 00:20:33.904 ] 00:20:33.904 06:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:33.904 06:38:26 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:33.904 [2024-10-04 06:38:26.455101] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
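The identify binary starting here was pointed at a target that identify.sh provisioned entirely over RPC a moment earlier: the TCP transport, a 64 MB malloc bdev, one NVM subsystem with that bdev as namespace 1, and listeners for both the subsystem and the discovery service. A compact replay under the same assumption as before (rpc.py in place of the rpc_cmd wrapper, run from the SPDK repo root):

    # Provision the target that produced the subsystem dump above
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_get_subsystems    # the JSON listing shown above
    # Probe the discovery controller with every log flag enabled
    build/bin/spdk_nvme_identify -L all -r \
        'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'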
00:20:33.904 [2024-10-04 06:38:26.455160] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93097 ] 00:20:34.166 [2024-10-04 06:38:26.593438] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:34.166 [2024-10-04 06:38:26.593503] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:34.166 [2024-10-04 06:38:26.593509] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:34.166 [2024-10-04 06:38:26.593518] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:34.166 [2024-10-04 06:38:26.593528] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:34.166 [2024-10-04 06:38:26.593697] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:34.166 [2024-10-04 06:38:26.593766] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21be540 0 00:20:34.166 [2024-10-04 06:38:26.607837] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:34.166 [2024-10-04 06:38:26.607863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:34.166 [2024-10-04 06:38:26.607885] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:34.166 [2024-10-04 06:38:26.607888] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:34.166 [2024-10-04 06:38:26.607939] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.166 [2024-10-04 06:38:26.607948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.166 [2024-10-04 06:38:26.607952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.166 [2024-10-04 06:38:26.607978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:34.166 [2024-10-04 06:38:26.608009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.166 [2024-10-04 06:38:26.615844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.166 [2024-10-04 06:38:26.615867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.166 [2024-10-04 06:38:26.615887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.615892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.615904] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:34.167 [2024-10-04 06:38:26.615911] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:34.167 [2024-10-04 06:38:26.615916] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:34.167 [2024-10-04 06:38:26.615933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.615937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 
06:38:26.615941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.615949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.615978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616102] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616108] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:34.167 [2024-10-04 06:38:26.616115] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:34.167 [2024-10-04 06:38:26.616122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616144] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.616151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.616179] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616285] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:34.167 [2024-10-04 06:38:26.616293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616303] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.616313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.616331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616405] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616416] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616431] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.616455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.616474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616572] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:34.167 [2024-10-04 06:38:26.616577] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616585] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616691] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:34.167 [2024-10-04 06:38:26.616701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616711] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.616725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.616744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616831] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
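Every *DEBUG* record in this stretch comes from that -L all flag; read together, they narrate the standard fabrics controller bring-up: FABRIC CONNECT on the admin queue, property reads of VS and CAP, a check of CC.EN, disable and wait for CSTS.RDY = 0, re-enable by writing CC.EN = 1, wait for CSTS.RDY = 1, then IDENTIFY. To skim just that handshake on a rerun against the same target, filtering the tool's output works; the grep patterns below are lifted straight from the trace text:

    # Replay the probe and keep only the controller state transitions
    build/bin/spdk_nvme_identify -L all -r \
        'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        2>&1 | grep -E 'setting state|CC\.EN|CSTS\.RDY'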
00:20:34.167 [2024-10-04 06:38:26.616834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:34.167 [2024-10-04 06:38:26.616850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.616865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.616884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.616976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.616982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.616985] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.616989] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.616994] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:34.167 [2024-10-04 06:38:26.616999] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:34.167 [2024-10-04 06:38:26.617021] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:34.167 [2024-10-04 06:38:26.617038] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:34.167 [2024-10-04 06:38:26.617047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617055] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.167 [2024-10-04 06:38:26.617062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.167 [2024-10-04 06:38:26.617081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.167 [2024-10-04 06:38:26.617235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.167 [2024-10-04 06:38:26.617242] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.167 [2024-10-04 06:38:26.617245] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617249] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21be540): datao=0, datal=4096, cccid=0 00:20:34.167 [2024-10-04 06:38:26.617254] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21f7220) on tqpair(0x21be540): expected_datao=0, 
payload_size=4096 00:20:34.167 [2024-10-04 06:38:26.617263] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617268] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617289] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.167 [2024-10-04 06:38:26.617295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.167 [2024-10-04 06:38:26.617298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.167 [2024-10-04 06:38:26.617302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.167 [2024-10-04 06:38:26.617312] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:34.167 [2024-10-04 06:38:26.617317] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:34.167 [2024-10-04 06:38:26.617321] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:34.167 [2024-10-04 06:38:26.617327] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:34.167 [2024-10-04 06:38:26.617331] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:34.167 [2024-10-04 06:38:26.617336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:34.167 [2024-10-04 06:38:26.617348] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:34.168 [2024-10-04 06:38:26.617356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:34.168 [2024-10-04 06:38:26.617390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.168 [2024-10-04 06:38:26.617467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.617474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.617477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7220) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.617489] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.168 [2024-10-04 
06:38:26.617508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617511] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.168 [2024-10-04 06:38:26.617525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.168 [2024-10-04 06:38:26.617543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617546] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.168 [2024-10-04 06:38:26.617559] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:34.168 [2024-10-04 06:38:26.617572] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:34.168 [2024-10-04 06:38:26.617579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.168 [2024-10-04 06:38:26.617613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7220, cid 0, qid 0 00:20:34.168 [2024-10-04 06:38:26.617620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7380, cid 1, qid 0 00:20:34.168 [2024-10-04 06:38:26.617624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f74e0, cid 2, qid 0 00:20:34.168 [2024-10-04 06:38:26.617629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.168 [2024-10-04 06:38:26.617633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f77a0, cid 4, qid 0 00:20:34.168 [2024-10-04 06:38:26.617751] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.617757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.617761] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x21f77a0) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.617771] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:34.168 [2024-10-04 06:38:26.617776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:34.168 [2024-10-04 06:38:26.617801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.617823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.168 [2024-10-04 06:38:26.617865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f77a0, cid 4, qid 0 00:20:34.168 [2024-10-04 06:38:26.617973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.168 [2024-10-04 06:38:26.617979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.168 [2024-10-04 06:38:26.617983] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.617986] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21be540): datao=0, datal=4096, cccid=4 00:20:34.168 [2024-10-04 06:38:26.617991] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21f77a0) on tqpair(0x21be540): expected_datao=0, payload_size=4096 00:20:34.168 [2024-10-04 06:38:26.617998] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618002] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618021] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.618027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.618030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f77a0) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.618048] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:34.168 [2024-10-04 06:38:26.618098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618107] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.618114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.168 [2024-10-04 06:38:26.618121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.618134] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.168 [2024-10-04 06:38:26.618159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f77a0, cid 4, qid 0 00:20:34.168 [2024-10-04 06:38:26.618166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7900, cid 5, qid 0 00:20:34.168 [2024-10-04 06:38:26.618327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.168 [2024-10-04 06:38:26.618343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.168 [2024-10-04 06:38:26.618348] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618351] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21be540): datao=0, datal=1024, cccid=4 00:20:34.168 [2024-10-04 06:38:26.618356] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21f77a0) on tqpair(0x21be540): expected_datao=0, payload_size=1024 00:20:34.168 [2024-10-04 06:38:26.618363] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618367] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.618378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.618381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.618385] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7900) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.663867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.663888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.663908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.663912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f77a0) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.663925] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.663930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.663933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21be540) 00:20:34.168 [2024-10-04 06:38:26.663941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.168 [2024-10-04 06:38:26.663982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f77a0, cid 4, qid 0 00:20:34.168 [2024-10-04 06:38:26.664078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.168 [2024-10-04 06:38:26.664084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.168 [2024-10-04 06:38:26.664088] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664091] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21be540): datao=0, datal=3072, cccid=4 00:20:34.168 [2024-10-04 06:38:26.664096] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21f77a0) on tqpair(0x21be540): expected_datao=0, payload_size=3072 00:20:34.168 [2024-10-04 
06:38:26.664103] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664107] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.168 [2024-10-04 06:38:26.664120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.168 [2024-10-04 06:38:26.664124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664127] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f77a0) on tqpair=0x21be540 00:20:34.168 [2024-10-04 06:38:26.664137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.168 [2024-10-04 06:38:26.664160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21be540) 00:20:34.169 [2024-10-04 06:38:26.664167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.169 [2024-10-04 06:38:26.664207] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f77a0, cid 4, qid 0 00:20:34.169 [2024-10-04 06:38:26.664290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.169 [2024-10-04 06:38:26.664296] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.169 [2024-10-04 06:38:26.664300] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.169 [2024-10-04 06:38:26.664303] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21be540): datao=0, datal=8, cccid=4 00:20:34.169 [2024-10-04 06:38:26.664308] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21f77a0) on tqpair(0x21be540): expected_datao=0, payload_size=8 00:20:34.169 [2024-10-04 06:38:26.664315] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.169 [2024-10-04 06:38:26.664319] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.169 ===================================================== 00:20:34.169 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:34.169 ===================================================== 00:20:34.169 Controller Capabilities/Features 00:20:34.169 ================================ 00:20:34.169 Vendor ID: 0000 00:20:34.169 Subsystem Vendor ID: 0000 00:20:34.169 Serial Number: .................... 00:20:34.169 Model Number: ........................................ 
00:20:34.169 Firmware Version: 24.01.1 00:20:34.169 Recommended Arb Burst: 0 00:20:34.169 IEEE OUI Identifier: 00 00 00 00:20:34.169 Multi-path I/O 00:20:34.169 May have multiple subsystem ports: No 00:20:34.169 May have multiple controllers: No 00:20:34.169 Associated with SR-IOV VF: No 00:20:34.169 Max Data Transfer Size: 131072 00:20:34.169 Max Number of Namespaces: 0 00:20:34.169 Max Number of I/O Queues: 1024 00:20:34.169 NVMe Specification Version (VS): 1.3 00:20:34.169 NVMe Specification Version (Identify): 1.3 00:20:34.169 Maximum Queue Entries: 128 00:20:34.169 Contiguous Queues Required: Yes 00:20:34.169 Arbitration Mechanisms Supported 00:20:34.169 Weighted Round Robin: Not Supported 00:20:34.169 Vendor Specific: Not Supported 00:20:34.169 Reset Timeout: 15000 ms 00:20:34.169 Doorbell Stride: 4 bytes 00:20:34.169 NVM Subsystem Reset: Not Supported 00:20:34.169 Command Sets Supported 00:20:34.169 NVM Command Set: Supported 00:20:34.169 Boot Partition: Not Supported 00:20:34.169 Memory Page Size Minimum: 4096 bytes 00:20:34.169 Memory Page Size Maximum: 4096 bytes 00:20:34.169 Persistent Memory Region: Not Supported 00:20:34.169 Optional Asynchronous Events Supported 00:20:34.169 Namespace Attribute Notices: Not Supported 00:20:34.169 Firmware Activation Notices: Not Supported 00:20:34.169 ANA Change Notices: Not Supported 00:20:34.169 PLE Aggregate Log Change Notices: Not Supported 00:20:34.169 LBA Status Info Alert Notices: Not Supported 00:20:34.169 EGE Aggregate Log Change Notices: Not Supported 00:20:34.169 Normal NVM Subsystem Shutdown event: Not Supported 00:20:34.169 Zone Descriptor Change Notices: Not Supported 00:20:34.169 Discovery Log Change Notices: Supported 00:20:34.169 Controller Attributes 00:20:34.169 128-bit Host Identifier: Not Supported 00:20:34.169 Non-Operational Permissive Mode: Not Supported 00:20:34.169 NVM Sets: Not Supported 00:20:34.169 Read Recovery Levels: Not Supported 00:20:34.169 Endurance Groups: Not Supported 00:20:34.169 Predictable Latency Mode: Not Supported 00:20:34.169 Traffic Based Keep Alive: Not Supported 00:20:34.169 Namespace Granularity: Not Supported 00:20:34.169 SQ Associations: Not Supported 00:20:34.169 UUID List: Not Supported 00:20:34.169 Multi-Domain Subsystem: Not Supported 00:20:34.169 Fixed Capacity Management: Not Supported 00:20:34.169 Variable Capacity Management: Not Supported 00:20:34.169 Delete Endurance Group: Not Supported 00:20:34.169 Delete NVM Set: Not Supported 00:20:34.169 Extended LBA Formats Supported: Not Supported 00:20:34.169 Flexible Data Placement Supported: Not Supported 00:20:34.169 00:20:34.169 Controller Memory Buffer Support 00:20:34.169 ================================ 00:20:34.169 Supported: No 00:20:34.169 00:20:34.169 Persistent Memory Region Support 00:20:34.169 ================================ 00:20:34.169 Supported: No 00:20:34.169 00:20:34.169 Admin Command Set Attributes 00:20:34.169 ============================ 00:20:34.169 Security Send/Receive: Not Supported 00:20:34.169 Format NVM: Not Supported 00:20:34.169 Firmware Activate/Download: Not Supported 00:20:34.169 Namespace Management: Not Supported 00:20:34.169 Device Self-Test: Not Supported 00:20:34.169 Directives: Not Supported 00:20:34.169 NVMe-MI: Not Supported 00:20:34.169 Virtualization Management: Not Supported 00:20:34.169 Doorbell Buffer Config: Not Supported 00:20:34.169 Get LBA Status Capability: Not Supported 00:20:34.169 Command & Feature Lockdown Capability: Not Supported 00:20:34.169 Abort Command Limit: 1 00:20:34.169 
Async Event Request Limit: 4 00:20:34.169 Number of Firmware Slots: N/A 00:20:34.169 Firmware Slot 1 Read-Only: N/A 00:20:34.169 [2024-10-04 06:38:26.704896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.169 [2024-10-04 06:38:26.704935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.169 [2024-10-04 06:38:26.704940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.169 [2024-10-04 06:38:26.704944] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f77a0) on tqpair=0x21be540 00:20:34.169 Firmware Activation Without Reset: N/A 00:20:34.169 Multiple Update Detection Support: N/A 00:20:34.169 Firmware Update Granularity: No Information Provided 00:20:34.169 Per-Namespace SMART Log: No 00:20:34.169 Asymmetric Namespace Access Log Page: Not Supported 00:20:34.169 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:34.169 Command Effects Log Page: Not Supported 00:20:34.169 Get Log Page Extended Data: Supported 00:20:34.169 Telemetry Log Pages: Not Supported 00:20:34.169 Persistent Event Log Pages: Not Supported 00:20:34.169 Supported Log Pages Log Page: May Support 00:20:34.169 Commands Supported & Effects Log Page: Not Supported 00:20:34.169 Feature Identifiers & Effects Log Page: May Support 00:20:34.169 NVMe-MI Commands & Effects Log Page: May Support 00:20:34.169 Data Area 4 for Telemetry Log: Not Supported 00:20:34.169 Error Log Page Entries Supported: 128 00:20:34.169 Keep Alive: Not Supported 00:20:34.169 00:20:34.169 NVM Command Set Attributes 00:20:34.169 ========================== 00:20:34.169 Submission Queue Entry Size 00:20:34.169 Max: 1 00:20:34.169 Min: 1 00:20:34.169 Completion Queue Entry Size 00:20:34.169 Max: 1 00:20:34.169 Min: 1 00:20:34.169 Number of Namespaces: 0 00:20:34.169 Compare Command: Not Supported 00:20:34.169 Write Uncorrectable Command: Not Supported 00:20:34.169 Dataset Management Command: Not Supported 00:20:34.169 Write Zeroes Command: Not Supported 00:20:34.169 Set Features Save Field: Not Supported 00:20:34.169 Reservations: Not Supported 00:20:34.169 Timestamp: Not Supported 00:20:34.169 Copy: Not Supported 00:20:34.169 Volatile Write Cache: Not Present 00:20:34.169 Atomic Write Unit (Normal): 1 00:20:34.169 Atomic Write Unit (PFail): 1 00:20:34.169 Atomic Compare & Write Unit: 1 00:20:34.169 Fused Compare & Write: Supported 00:20:34.169 Scatter-Gather List 00:20:34.169 SGL Command Set: Supported 00:20:34.169 SGL Keyed: Supported 00:20:34.169 SGL Bit Bucket Descriptor: Not Supported 00:20:34.169 SGL Metadata Pointer: Not Supported 00:20:34.169 Oversized SGL: Not Supported 00:20:34.169 SGL Metadata Address: Not Supported 00:20:34.169 SGL Offset: Supported 00:20:34.169 Transport SGL Data Block: Not Supported 00:20:34.169 Replay Protected Memory Block: Not Supported 00:20:34.169 00:20:34.169 Firmware Slot Information 00:20:34.169 ========================= 00:20:34.169 Active slot: 0 00:20:34.169 00:20:34.169 00:20:34.169 Error Log 00:20:34.169 ========= 00:20:34.169 00:20:34.169 Active Namespaces 00:20:34.169 ================= 00:20:34.169 Discovery Log Page 00:20:34.169 ================== 00:20:34.169 Generation Counter: 2 00:20:34.169 Number of Records: 2 00:20:34.169 Record Format: 0 00:20:34.169 00:20:34.169 Discovery Log Entry 0 00:20:34.169 ---------------------- 00:20:34.169 Transport Type: 3 (TCP) 00:20:34.169 Address Family: 1 (IPv4) 00:20:34.169 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:34.169 Entry Flags: 00:20:34.169 Duplicate 
Returned Information: 1 00:20:34.169 Explicit Persistent Connection Support for Discovery: 1 00:20:34.169 Transport Requirements: 00:20:34.169 Secure Channel: Not Required 00:20:34.169 Port ID: 0 (0x0000) 00:20:34.169 Controller ID: 65535 (0xffff) 00:20:34.169 Admin Max SQ Size: 128 00:20:34.169 Transport Service Identifier: 4420 00:20:34.169 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:34.169 Transport Address: 10.0.0.2 00:20:34.169 Discovery Log Entry 1 00:20:34.169 ---------------------- 00:20:34.169 Transport Type: 3 (TCP) 00:20:34.169 Address Family: 1 (IPv4) 00:20:34.170 Subsystem Type: 2 (NVM Subsystem) 00:20:34.170 Entry Flags: 00:20:34.170 Duplicate Returned Information: 0 00:20:34.170 Explicit Persistent Connection Support for Discovery: 0 00:20:34.170 Transport Requirements: 00:20:34.170 Secure Channel: Not Required 00:20:34.170 Port ID: 0 (0x0000) 00:20:34.170 Controller ID: 65535 (0xffff) 00:20:34.170 Admin Max SQ Size: 128 00:20:34.170 Transport Service Identifier: 4420 00:20:34.170 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:34.170 Transport Address: 10.0.0.2 [2024-10-04 06:38:26.705079] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:34.170 [2024-10-04 06:38:26.705098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.170 [2024-10-04 06:38:26.705106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.170 [2024-10-04 06:38:26.705111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.170 [2024-10-04 06:38:26.705117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.170 [2024-10-04 06:38:26.705126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705168] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.705246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.705253] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.705256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.705268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.705408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.705415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.705419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.705429] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:34.170 [2024-10-04 06:38:26.705434] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:34.170 [2024-10-04 06:38:26.705443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.705542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.705548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.705552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705555] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.705567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.705677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.705683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.705687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.705701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705705] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.705803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.705809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.705812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.705841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.705849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.705855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.705917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.706001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.706008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.706012] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.706026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.706041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.706059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.706132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.706138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.706141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.706155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.706170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
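
The repeated FABRIC PROPERTY GET completions above are the host polling CSTS while the discovery controller shuts down; the earlier lines recorded RTD3E = 0 and a 10000 ms shutdown timeout. A minimal sketch of that CC.SHN/CSTS.SHST handshake follows, with read_cc/write_cc/read_csts/sleep_ms as hypothetical stand-ins for the fabric property transactions (they are not SPDK APIs):

    #include <stdint.h>
    #include <stdbool.h>

    #define CC_SHN_NORMAL   (1u << 14)   /* CC.SHN = 01b: normal shutdown notification */
    #define CSTS_SHST_MASK  (3u << 2)    /* CSTS.SHST: shutdown status field */
    #define CSTS_SHST_DONE  (2u << 2)    /* 10b: shutdown processing complete */

    /* Hypothetical stand-ins for FABRIC PROPERTY SET/GET on the admin queue. */
    extern uint32_t read_cc(void);
    extern void     write_cc(uint32_t cc);
    extern uint32_t read_csts(void);
    extern void     sleep_ms(unsigned ms);

    static bool shutdown_ctrlr(unsigned timeout_ms)
    {
        write_cc(read_cc() | CC_SHN_NORMAL);        /* the single PROPERTY SET above */
        for (unsigned t = 0; t < timeout_ms; t++) { /* the stream of PROPERTY GET polls */
            if ((read_csts() & CSTS_SHST_MASK) == CSTS_SHST_DONE)
                return true;                        /* "shutdown complete in 6 milliseconds" */
            sleep_ms(1);
        }
        return false;                               /* would trip the 10000 ms shutdown timeout */
    }
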
00:20:34.170 [2024-10-04 06:38:26.706188] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.706252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.706259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.706262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.706276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706284] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.706290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.706308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.706370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.706377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.706380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.706394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.170 [2024-10-04 06:38:26.706408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.170 [2024-10-04 06:38:26.706438] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.170 [2024-10-04 06:38:26.706501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.170 [2024-10-04 06:38:26.706507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.170 [2024-10-04 06:38:26.706510] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.170 [2024-10-04 06:38:26.706523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.170 [2024-10-04 06:38:26.706527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.706537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.706554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.706631] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.706638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.706641] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.706655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.706670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.706687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.706765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.706772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.706775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.706789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706793] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.706802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.706820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.706888] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.706896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.706899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.706913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.706921] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.706928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.706947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 
[2024-10-04 06:38:26.707087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707220] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707226] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.707230] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707233] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707244] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707248] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.707361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707383] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.707519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707523] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707533] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.707652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.707753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.707759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.707763] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.707776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.707784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.171 [2024-10-04 06:38:26.707791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.171 [2024-10-04 06:38:26.707809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.171 [2024-10-04 06:38:26.711864] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.171 [2024-10-04 06:38:26.711884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.171 [2024-10-04 06:38:26.711889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.711893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.171 [2024-10-04 06:38:26.711906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.171 [2024-10-04 06:38:26.711911] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.172 [2024-10-04 06:38:26.711915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21be540) 00:20:34.172 [2024-10-04 06:38:26.711922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.172 [2024-10-04 06:38:26.711946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21f7640, cid 3, qid 0 00:20:34.172 [2024-10-04 06:38:26.712024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.172 [2024-10-04 06:38:26.712046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.172 [2024-10-04 06:38:26.712050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.172 [2024-10-04 06:38:26.712053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21f7640) on tqpair=0x21be540 00:20:34.172 [2024-10-04 06:38:26.712062] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:34.172 00:20:34.172 06:38:26 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:34.172 [2024-10-04 06:38:26.748247] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:20:34.172 [2024-10-04 06:38:26.748315] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93100 ] 00:20:34.435 [2024-10-04 06:38:26.884810] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:34.435 [2024-10-04 06:38:26.884890] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:34.435 [2024-10-04 06:38:26.884896] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:34.435 [2024-10-04 06:38:26.884905] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:34.435 [2024-10-04 06:38:26.884914] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:34.435 [2024-10-04 06:38:26.885014] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:34.435 [2024-10-04 06:38:26.885058] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2441540 0 00:20:34.435 [2024-10-04 06:38:26.899842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:34.435 [2024-10-04 06:38:26.899865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:34.435 [2024-10-04 06:38:26.899886] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:34.435 [2024-10-04 06:38:26.899890] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:34.435 [2024-10-04 06:38:26.899933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.899939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.899943] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 
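
The identify run above addresses the target through a transport ID string (trtype, adrfam, traddr, trsvcid, subnqn). A minimal sketch of parsing such a string and connecting through SPDK's public API; spdk_nvme_transport_id_parse, spdk_nvme_connect, and spdk_nvme_detach are real SPDK calls, but environment setup (spdk_env_init) and error handling are abbreviated:

    #include <string.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Parse a transport ID string of the form used above and connect.
     * Assumes spdk_env_init() has already been called by the application. */
    static int connect_by_trid_string(const char *str)
    {
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;

        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid, str) != 0) {
            fprintf(stderr, "failed to parse transport ID '%s'\n", str);
            return -1;
        }

        /* Drives the icreq / FABRIC CONNECT / register-init sequence traced here. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect() failed\n");
            return -1;
        }

        /* ... use the controller ... */
        spdk_nvme_detach(ctrlr);   /* triggers the shutdown handshake seen earlier */
        return 0;
    }

Calling connect_by_trid_string("trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") would mirror the -r argument passed to spdk_nvme_identify above.
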
00:20:34.435 [2024-10-04 06:38:26.899953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:34.435 [2024-10-04 06:38:26.899980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.435 [2024-10-04 06:38:26.907834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.435 [2024-10-04 06:38:26.907857] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.435 [2024-10-04 06:38:26.907878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.907882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.435 [2024-10-04 06:38:26.907895] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:34.435 [2024-10-04 06:38:26.907903] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:34.435 [2024-10-04 06:38:26.907908] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:34.435 [2024-10-04 06:38:26.907922] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.907926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.907930] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.435 [2024-10-04 06:38:26.907938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.435 [2024-10-04 06:38:26.907964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.435 [2024-10-04 06:38:26.908047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.435 [2024-10-04 06:38:26.908053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.435 [2024-10-04 06:38:26.908057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.435 [2024-10-04 06:38:26.908066] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:34.435 [2024-10-04 06:38:26.908073] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:34.435 [2024-10-04 06:38:26.908080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.435 [2024-10-04 06:38:26.908093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.435 [2024-10-04 06:38:26.908126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.435 [2024-10-04 06:38:26.908204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.435 [2024-10-04 06:38:26.908210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.435 
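
After FABRIC CONNECT completes (CNTLID 0x0001 above), the init state machine reads the VS and CAP properties. Once a controller handle exists, those cached values can be read back through SPDK's accessors; a short sketch, assuming an already-connected ctrlr:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Read back the cached VS and CAP values for a connected controller. */
    static void print_vs_cap(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* Matches "NVMe Specification Version (VS): 1.3" in the identify output. */
        printf("NVMe version: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
        /* CAP.MQES is zero-based; 127 -> "Maximum Queue Entries: 128". */
        printf("Max queue entries: %u\n", (unsigned)cap.bits.mqes + 1);
        /* CAP.TO is in 500 ms units; 30 -> the 15000 ms timeouts in this trace. */
        printf("Ready timeout: %u ms\n", (unsigned)cap.bits.to * 500);
    }
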
[2024-10-04 06:38:26.908213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.435 [2024-10-04 06:38:26.908223] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:34.435 [2024-10-04 06:38:26.908231] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:34.435 [2024-10-04 06:38:26.908238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908245] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.435 [2024-10-04 06:38:26.908252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.435 [2024-10-04 06:38:26.908269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.435 [2024-10-04 06:38:26.908330] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.435 [2024-10-04 06:38:26.908337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.435 [2024-10-04 06:38:26.908340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908344] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.435 [2024-10-04 06:38:26.908350] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:34.435 [2024-10-04 06:38:26.908359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908363] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.435 [2024-10-04 06:38:26.908374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.435 [2024-10-04 06:38:26.908390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.435 [2024-10-04 06:38:26.908461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.435 [2024-10-04 06:38:26.908467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.435 [2024-10-04 06:38:26.908470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.435 [2024-10-04 06:38:26.908474] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.435 [2024-10-04 06:38:26.908479] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:34.435 [2024-10-04 06:38:26.908484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:34.435 [2024-10-04 06:38:26.908491] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
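
The "check en" and "enable controller by writing CC.EN = 1" states above walk the standard NVMe enable handshake, with the 15000 ms state timeouts derived from CAP.TO (30 x 500 ms). A schematic of that sequence, reusing the same hypothetical property helpers as the shutdown sketch (again, not SPDK APIs):

    #include <stdint.h>
    #include <stdbool.h>

    #define CC_EN    (1u << 0)   /* CC.EN: controller enable */
    #define CSTS_RDY (1u << 0)   /* CSTS.RDY: controller ready */

    /* Same hypothetical property helpers as in the shutdown sketch. */
    extern uint32_t read_cc(void);
    extern void     write_cc(uint32_t cc);
    extern uint32_t read_csts(void);
    extern void     sleep_ms(unsigned ms);

    static bool enable_ctrlr(unsigned timeout_ms)
    {
        /* The trace already observed "CC.EN = 0 && CSTS.RDY = 0" (controller
         * is disabled), so it can go straight to the enable step. */
        write_cc(read_cc() | CC_EN);                /* "Setting CC.EN = 1" */
        for (unsigned t = 0; t < timeout_ms; t++) { /* "wait for CSTS.RDY = 1" */
            if (read_csts() & CSTS_RDY)
                return true;  /* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
            sleep_ms(1);
        }
        return false;
    }
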
00:20:34.436 [2024-10-04 06:38:26.908596] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:34.436 [2024-10-04 06:38:26.908600] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:34.436 [2024-10-04 06:38:26.908608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908612] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.908622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.436 [2024-10-04 06:38:26.908640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.436 [2024-10-04 06:38:26.908702] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.908709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.908712] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.436 [2024-10-04 06:38:26.908722] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:34.436 [2024-10-04 06:38:26.908731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908739] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.908745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.436 [2024-10-04 06:38:26.908762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.436 [2024-10-04 06:38:26.908824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.908831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.908834] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908838] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.436 [2024-10-04 06:38:26.908843] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:34.436 [2024-10-04 06:38:26.908848] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.908855] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:34.436 [2024-10-04 06:38:26.908883] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.908894] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.908902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.908909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.436 [2024-10-04 06:38:26.908929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.436 [2024-10-04 06:38:26.909048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.436 [2024-10-04 06:38:26.909055] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.436 [2024-10-04 06:38:26.909058] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909062] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=4096, cccid=0 00:20:34.436 [2024-10-04 06:38:26.909066] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a220) on tqpair(0x2441540): expected_datao=0, payload_size=4096 00:20:34.436 [2024-10-04 06:38:26.909074] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909079] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.909093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.909096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.436 [2024-10-04 06:38:26.909108] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:34.436 [2024-10-04 06:38:26.909113] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:34.436 [2024-10-04 06:38:26.909118] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:34.436 [2024-10-04 06:38:26.909122] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:34.436 [2024-10-04 06:38:26.909126] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:34.436 [2024-10-04 06:38:26.909131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909144] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:34.436 [2024-10-04 06:38:26.909185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.436 [2024-10-04 06:38:26.909258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.909265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.909268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a220) on tqpair=0x2441540 00:20:34.436 [2024-10-04 06:38:26.909280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.436 [2024-10-04 06:38:26.909299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909302] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909306] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.436 [2024-10-04 06:38:26.909317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.436 [2024-10-04 06:38:26.909335] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909342] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.436 [2024-10-04 06:38:26.909352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909365] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909372] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909379] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909385] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.436 [2024-10-04 06:38:26.909405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a220, cid 0, qid 0 00:20:34.436 [2024-10-04 06:38:26.909412] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a380, cid 1, qid 0 00:20:34.436 [2024-10-04 06:38:26.909417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a4e0, cid 2, qid 0 00:20:34.436 [2024-10-04 06:38:26.909421] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.436 [2024-10-04 06:38:26.909426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.436 [2024-10-04 06:38:26.909537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.909544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.909547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.436 [2024-10-04 06:38:26.909557] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:34.436 [2024-10-04 06:38:26.909562] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909571] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:34.436 [2024-10-04 06:38:26.909588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.436 [2024-10-04 06:38:26.909596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.436 [2024-10-04 06:38:26.909603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:34.436 [2024-10-04 06:38:26.909620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.436 [2024-10-04 06:38:26.909684] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.436 [2024-10-04 06:38:26.909690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.436 [2024-10-04 06:38:26.909693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.909753] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.909763] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.909771] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909775] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.909785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.909803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.437 [2024-10-04 06:38:26.909886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.437 [2024-10-04 06:38:26.909895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.437 [2024-10-04 06:38:26.909898] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909902] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=4096, cccid=4 00:20:34.437 [2024-10-04 06:38:26.909906] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a7a0) on tqpair(0x2441540): expected_datao=0, payload_size=4096 00:20:34.437 [2024-10-04 06:38:26.909914] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909918] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.909931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.909934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.909953] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:34.437 [2024-10-04 06:38:26.909964] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.909984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.909991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.909999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.910026] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.437 [2024-10-04 06:38:26.910125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.437 [2024-10-04 06:38:26.910132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.437 [2024-10-04 06:38:26.910136] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910139] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x2441540): datao=0, datal=4096, cccid=4 00:20:34.437 [2024-10-04 06:38:26.910143] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a7a0) on tqpair(0x2441540): expected_datao=0, payload_size=4096 00:20:34.437 [2024-10-04 06:38:26.910151] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910154] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910162] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910168] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.910242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.437 [2024-10-04 06:38:26.910350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.437 [2024-10-04 06:38:26.910357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.437 [2024-10-04 06:38:26.910360] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910364] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=4096, cccid=4 00:20:34.437 [2024-10-04 06:38:26.910368] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a7a0) on tqpair(0x2441540): expected_datao=0, payload_size=4096 00:20:34.437 [2024-10-04 06:38:26.910375] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910379] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910387] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910396] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910399] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910408] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910416] 
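The three IDENTIFY admin commands traced above differ only in CDW10: for Identify (opcode 06h) the low byte of CDW10 carries the CNS code, so cdw10:00000002 asks for the active namespace ID list (which is where "Namespace 1 was added" comes from), cdw10:00000000 fetches the 4096-byte namespace data structure for nsid 1, and cdw10:00000003 fetches that namespace's identification descriptors. A minimal spec-level sketch of those CDW10 values (illustrative C, not the SPDK internals):

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe-spec CNS codes for the Identify command (opcode 06h). */
    enum identify_cns {
        CNS_IDENTIFY_NS       = 0x00, /* per-namespace data, nsid=1 above */
        CNS_IDENTIFY_CTRLR    = 0x01, /* controller data (dumped below) */
        CNS_ACTIVE_NS_LIST    = 0x02, /* active NSID list, first of the three */
        CNS_NS_ID_DESCRIPTORS = 0x03, /* NGUID/EUI64/UUID descriptor list */
    };

    /* CNS occupies CDW10 bits 7:0; the other CDW10 fields stay zero here. */
    static uint32_t identify_cdw10(enum identify_cns cns)
    {
        return (uint32_t)cns;
    }

    int main(void)
    {
        /* Prints 00000002, 00000000, 00000003 - the order seen in the log. */
        printf("cdw10:%08x\n", identify_cdw10(CNS_ACTIVE_NS_LIST));
        printf("cdw10:%08x\n", identify_cdw10(CNS_IDENTIFY_NS));
        printf("cdw10:%08x\n", identify_cdw10(CNS_NS_ID_DESCRIPTORS));
        return 0;
    }

Each response arrives as a C2HData PDU with datal=4096, matching the fixed 4 KiB size of the Identify data structure.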
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910426] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910433] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910443] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:34.437 [2024-10-04 06:38:26.910447] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:34.437 [2024-10-04 06:38:26.910453] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:34.437 [2024-10-04 06:38:26.910467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.910488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.437 [2024-10-04 06:38:26.910524] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.437 [2024-10-04 06:38:26.910531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a900, cid 5, qid 0 00:20:34.437 [2024-10-04 06:38:26.910633] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910642] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910646] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a900) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 
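At this point the state machine reaches ready. The Set/Get Features exchanges above use the same CDW10 convention as Identify, with the Feature Identifier in bits 7:0 - 0Fh for the keep-alive timer, 07h for number of queues, 01h for arbitration - and the driver now emits a KEEP ALIVE (opcode 18h) every 5,000,000 us, which appears to be half of the 10 s keep-alive granularity reported in the controller data below. A spec-level sketch of the dwords involved (constants per the NVMe spec; the helper is illustrative, not SPDK's):

    #include <stdint.h>

    /* Feature Identifiers seen in the trace (CDW10 bits 7:0). */
    #define NVME_FEAT_ARBITRATION      0x01 /* GET FEATURES ... cdw10:00000001 */
    #define NVME_FEAT_NUMBER_OF_QUEUES 0x07 /* SET FEATURES ... cdw10:00000007 */
    #define NVME_FEAT_KEEP_ALIVE_TIMER 0x0f /* GET FEATURES ... cdw10:0000000f */

    /* For Set Features / Number of Queues, CDW11 packs the requested
     * counts as 0-based values: (I/O CQs - 1) << 16 | (I/O SQs - 1). */
    static uint32_t number_of_queues_cdw11(uint16_t num_io_sq, uint16_t num_io_cq)
    {
        return ((uint32_t)(num_io_cq - 1) << 16) | (uint32_t)(num_io_sq - 1);
    }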
[2024-10-04 06:38:26.910680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.910706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a900, cid 5, qid 0 00:20:34.437 [2024-10-04 06:38:26.910773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910783] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910787] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a900) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910805] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441540) 00:20:34.437 [2024-10-04 06:38:26.910811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.437 [2024-10-04 06:38:26.910826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a900, cid 5, qid 0 00:20:34.437 [2024-10-04 06:38:26.910920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.437 [2024-10-04 06:38:26.910931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.437 [2024-10-04 06:38:26.910935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.437 [2024-10-04 06:38:26.910938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a900) on tqpair=0x2441540 00:20:34.437 [2024-10-04 06:38:26.910949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.910953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.910957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441540) 00:20:34.438 [2024-10-04 06:38:26.910964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.438 [2024-10-04 06:38:26.910996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a900, cid 5, qid 0 00:20:34.438 [2024-10-04 06:38:26.911075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.438 [2024-10-04 06:38:26.911082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.438 [2024-10-04 06:38:26.911085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a900) on tqpair=0x2441540 00:20:34.438 [2024-10-04 06:38:26.911102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.438 [2024-10-04 
06:38:26.911110] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441540) 00:20:34.438 [2024-10-04 06:38:26.911117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.438 [2024-10-04 06:38:26.911124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441540) 00:20:34.438 [2024-10-04 06:38:26.911137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.438 [2024-10-04 06:38:26.911144] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2441540) 00:20:34.438 [2024-10-04 06:38:26.911157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.438 [2024-10-04 06:38:26.911164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911168] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2441540) 00:20:34.438 [2024-10-04 06:38:26.911177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.438 [2024-10-04 06:38:26.911196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a900, cid 5, qid 0 00:20:34.438 [2024-10-04 06:38:26.911203] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a7a0, cid 4, qid 0 00:20:34.438 [2024-10-04 06:38:26.911207] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247aa60, cid 6, qid 0 00:20:34.438 [2024-10-04 06:38:26.911212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247abc0, cid 7, qid 0 00:20:34.438 [2024-10-04 06:38:26.911383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.438 [2024-10-04 06:38:26.911390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.438 [2024-10-04 06:38:26.911393] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911397] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=8192, cccid=5 00:20:34.438 [2024-10-04 06:38:26.911401] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a900) on tqpair(0x2441540): expected_datao=0, payload_size=8192 00:20:34.438 [2024-10-04 06:38:26.911417] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911421] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.438 [2024-10-04 
06:38:26.911432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.438 [2024-10-04 06:38:26.911436] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911439] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=512, cccid=4 00:20:34.438 [2024-10-04 06:38:26.911443] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247a7a0) on tqpair(0x2441540): expected_datao=0, payload_size=512 00:20:34.438 [2024-10-04 06:38:26.911450] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911453] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.438 [2024-10-04 06:38:26.911464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.438 [2024-10-04 06:38:26.911467] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911470] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=512, cccid=6 00:20:34.438 [2024-10-04 06:38:26.911474] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247aa60) on tqpair(0x2441540): expected_datao=0, payload_size=512 00:20:34.438 [2024-10-04 06:38:26.911481] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911485] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911490] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.438 [2024-10-04 06:38:26.911495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.438 [2024-10-04 06:38:26.911498] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911502] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441540): datao=0, datal=4096, cccid=7 00:20:34.438 [2024-10-04 06:38:26.911506] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247abc0) on tqpair(0x2441540): expected_datao=0, payload_size=4096 00:20:34.438 [2024-10-04 06:38:26.911519] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911523] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911530] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.438 [2024-10-04 06:38:26.911536] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.438 [2024-10-04 06:38:26.911539] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a900) on tqpair=0x2441540 00:20:34.438 [2024-10-04 06:38:26.911560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.438 [2024-10-04 06:38:26.911566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.438 [2024-10-04 06:38:26.911569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.438 [2024-10-04 06:38:26.911573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a7a0) on tqpair=0x2441540 00:20:34.438 [2024-10-04 06:38:26.911583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.438 [2024-10-04 06:38:26.911589] 
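The four GET LOG PAGE commands above encode both the page and the transfer size in CDW10: bits 7:0 are the Log Page Identifier and bits 27:16 are NUMDL, the 0-based length in dwords. Decoding the values from the trace shows they line up exactly with the datal sizes of the C2HData PDUs carrying the responses (8192, 512, 512 and 4096 bytes). A short sketch of that decoding (spec layout; helper name illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* GET LOG PAGE (opcode 02h): LID in CDW10 bits 7:0, NUMDL in bits 27:16. */
    static void decode_get_log_page(uint32_t cdw10)
    {
        uint8_t  lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xfff;
        printf("lid=%02xh bytes=%u\n", lid, (numdl + 1) * 4);
    }

    int main(void)
    {
        decode_get_log_page(0x07ff0001); /* Error Information   -> 8192 bytes */
        decode_get_log_page(0x007f0002); /* SMART / Health      -> 512 bytes  */
        decode_get_log_page(0x007f0003); /* Firmware Slot       -> 512 bytes  */
        decode_get_log_page(0x03ff0005); /* Commands & Effects  -> 4096 bytes */
        return 0;
    }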
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:34.438 [2024-10-04 06:38:26.911592] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:34.438 [2024-10-04 06:38:26.911596] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247aa60) on tqpair=0x2441540
00:20:34.438 [2024-10-04 06:38:26.911603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:34.438 [2024-10-04 06:38:26.911609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:34.438 [2024-10-04 06:38:26.911612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:34.438 [2024-10-04 06:38:26.911616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247abc0) on tqpair=0x2441540
00:20:34.438 =====================================================
00:20:34.438 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:34.438 =====================================================
00:20:34.438 Controller Capabilities/Features
00:20:34.438 ================================
00:20:34.438 Vendor ID: 8086
00:20:34.438 Subsystem Vendor ID: 8086
00:20:34.438 Serial Number: SPDK00000000000001
00:20:34.438 Model Number: SPDK bdev Controller
00:20:34.438 Firmware Version: 24.01.1
00:20:34.438 Recommended Arb Burst: 6
00:20:34.438 IEEE OUI Identifier: e4 d2 5c
00:20:34.438 Multi-path I/O
00:20:34.438 May have multiple subsystem ports: Yes
00:20:34.438 May have multiple controllers: Yes
00:20:34.438 Associated with SR-IOV VF: No
00:20:34.438 Max Data Transfer Size: 131072
00:20:34.438 Max Number of Namespaces: 32
00:20:34.438 Max Number of I/O Queues: 127
00:20:34.438 NVMe Specification Version (VS): 1.3
00:20:34.438 NVMe Specification Version (Identify): 1.3
00:20:34.438 Maximum Queue Entries: 128
00:20:34.438 Contiguous Queues Required: Yes
00:20:34.438 Arbitration Mechanisms Supported
00:20:34.438 Weighted Round Robin: Not Supported
00:20:34.438 Vendor Specific: Not Supported
00:20:34.438 Reset Timeout: 15000 ms
00:20:34.438 Doorbell Stride: 4 bytes
00:20:34.438 NVM Subsystem Reset: Not Supported
00:20:34.438 Command Sets Supported
00:20:34.438 NVM Command Set: Supported
00:20:34.438 Boot Partition: Not Supported
00:20:34.438 Memory Page Size Minimum: 4096 bytes
00:20:34.438 Memory Page Size Maximum: 4096 bytes
00:20:34.438 Persistent Memory Region: Not Supported
00:20:34.438 Optional Asynchronous Events Supported
00:20:34.438 Namespace Attribute Notices: Supported
00:20:34.438 Firmware Activation Notices: Not Supported
00:20:34.438 ANA Change Notices: Not Supported
00:20:34.438 PLE Aggregate Log Change Notices: Not Supported
00:20:34.438 LBA Status Info Alert Notices: Not Supported
00:20:34.438 EGE Aggregate Log Change Notices: Not Supported
00:20:34.438 Normal NVM Subsystem Shutdown event: Not Supported
00:20:34.438 Zone Descriptor Change Notices: Not Supported
00:20:34.438 Discovery Log Change Notices: Not Supported
00:20:34.438 Controller Attributes
00:20:34.438 128-bit Host Identifier: Supported
00:20:34.438 Non-Operational Permissive Mode: Not Supported
00:20:34.438 NVM Sets: Not Supported
00:20:34.438 Read Recovery Levels: Not Supported
00:20:34.438 Endurance Groups: Not Supported
00:20:34.438 Predictable Latency Mode: Not Supported
00:20:34.438 Traffic Based Keep Alive: Not Supported
00:20:34.438 Namespace Granularity: Not Supported
00:20:34.438 SQ Associations: Not Supported
00:20:34.438 UUID List: Not Supported
00:20:34.438 Multi-Domain Subsystem: Not Supported
00:20:34.438 Fixed Capacity Management: Not Supported
00:20:34.438 Variable Capacity Management: Not Supported
00:20:34.439 Delete Endurance Group: Not Supported
00:20:34.439 Delete NVM Set: Not Supported
00:20:34.439 Extended LBA Formats Supported: Not Supported
00:20:34.439 Flexible Data Placement Supported: Not Supported
00:20:34.439
00:20:34.439 Controller Memory Buffer Support
00:20:34.439 ================================
00:20:34.439 Supported: No
00:20:34.439
00:20:34.439 Persistent Memory Region Support
00:20:34.439 ================================
00:20:34.439 Supported: No
00:20:34.439
00:20:34.439 Admin Command Set Attributes
00:20:34.439 ============================
00:20:34.439 Security Send/Receive: Not Supported
00:20:34.439 Format NVM: Not Supported
00:20:34.439 Firmware Activate/Download: Not Supported
00:20:34.439 Namespace Management: Not Supported
00:20:34.439 Device Self-Test: Not Supported
00:20:34.439 Directives: Not Supported
00:20:34.439 NVMe-MI: Not Supported
00:20:34.439 Virtualization Management: Not Supported
00:20:34.439 Doorbell Buffer Config: Not Supported
00:20:34.439 Get LBA Status Capability: Not Supported
00:20:34.439 Command & Feature Lockdown Capability: Not Supported
00:20:34.439 Abort Command Limit: 4
00:20:34.439 Async Event Request Limit: 4
00:20:34.439 Number of Firmware Slots: N/A
00:20:34.439 Firmware Slot 1 Read-Only: N/A
00:20:34.439 Firmware Activation Without Reset: N/A
00:20:34.439 Multiple Update Detection Support: N/A
00:20:34.439 Firmware Update Granularity: No Information Provided
00:20:34.439 Per-Namespace SMART Log: No
00:20:34.439 Asymmetric Namespace Access Log Page: Not Supported
00:20:34.439 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:34.439 Command Effects Log Page: Supported
00:20:34.439 Get Log Page Extended Data: Supported
00:20:34.439 Telemetry Log Pages: Not Supported
00:20:34.439 Persistent Event Log Pages: Not Supported
00:20:34.439 Supported Log Pages Log Page: May Support
00:20:34.439 Commands Supported & Effects Log Page: Not Supported
00:20:34.439 Feature Identifiers & Effects Log Page: May Support
00:20:34.439 NVMe-MI Commands & Effects Log Page: May Support
00:20:34.439 Data Area 4 for Telemetry Log: Not Supported
00:20:34.439 Error Log Page Entries Supported: 128
00:20:34.439 Keep Alive: Supported
00:20:34.439 Keep Alive Granularity: 10000 ms
00:20:34.439
00:20:34.439 NVM Command Set Attributes
00:20:34.439 ==========================
00:20:34.439 Submission Queue Entry Size
00:20:34.439 Max: 64
00:20:34.439 Min: 64
00:20:34.439 Completion Queue Entry Size
00:20:34.439 Max: 16
00:20:34.439 Min: 16
00:20:34.439 Number of Namespaces: 32
00:20:34.439 Compare Command: Supported
00:20:34.439 Write Uncorrectable Command: Not Supported
00:20:34.439 Dataset Management Command: Supported
00:20:34.439 Write Zeroes Command: Supported
00:20:34.439 Set Features Save Field: Not Supported
00:20:34.439 Reservations: Supported
00:20:34.439 Timestamp: Not Supported
00:20:34.439 Copy: Supported
00:20:34.439 Volatile Write Cache: Present
00:20:34.439 Atomic Write Unit (Normal): 1
00:20:34.439 Atomic Write Unit (PFail): 1
00:20:34.439 Atomic Compare & Write Unit: 1
00:20:34.439 Fused Compare & Write: Supported
00:20:34.439 Scatter-Gather List
00:20:34.439 SGL Command Set: Supported
00:20:34.439 SGL Keyed: Supported
00:20:34.439 SGL Bit Bucket Descriptor: Not Supported
00:20:34.439 SGL Metadata Pointer: Not Supported
00:20:34.439 Oversized SGL: Not Supported
00:20:34.439 SGL Metadata Address: Not Supported
00:20:34.439 SGL Offset: Supported
00:20:34.439 Transport SGL Data Block: Not Supported
00:20:34.439 Replay Protected Memory Block: Not Supported
00:20:34.439
00:20:34.439 Firmware Slot Information
00:20:34.439 =========================
00:20:34.439 Active slot: 1
00:20:34.439 Slot 1 Firmware Revision: 24.01.1
00:20:34.439
00:20:34.439
00:20:34.439 Commands Supported and Effects
00:20:34.439 ==============================
00:20:34.439 Admin Commands
00:20:34.439 --------------
00:20:34.439 Get Log Page (02h): Supported
00:20:34.439 Identify (06h): Supported
00:20:34.439 Abort (08h): Supported
00:20:34.439 Set Features (09h): Supported
00:20:34.439 Get Features (0Ah): Supported
00:20:34.439 Asynchronous Event Request (0Ch): Supported
00:20:34.439 Keep Alive (18h): Supported
00:20:34.439 I/O Commands
00:20:34.439 ------------
00:20:34.439 Flush (00h): Supported LBA-Change
00:20:34.439 Write (01h): Supported LBA-Change
00:20:34.439 Read (02h): Supported
00:20:34.439 Compare (05h): Supported
00:20:34.439 Write Zeroes (08h): Supported LBA-Change
00:20:34.439 Dataset Management (09h): Supported LBA-Change
00:20:34.439 Copy (19h): Supported LBA-Change
00:20:34.439 Unknown (79h): Supported LBA-Change
00:20:34.439 Unknown (7Ah): Supported
00:20:34.439
00:20:34.439 Error Log
00:20:34.439 =========
00:20:34.439
00:20:34.439 Arbitration
00:20:34.439 ===========
00:20:34.439 Arbitration Burst: 1
00:20:34.439
00:20:34.439 Power Management
00:20:34.439 ================
00:20:34.439 Number of Power States: 1
00:20:34.439 Current Power State: Power State #0
00:20:34.439 Power State #0:
00:20:34.439 Max Power: 0.00 W
00:20:34.439 Non-Operational State: Operational
00:20:34.439 Entry Latency: Not Reported
00:20:34.439 Exit Latency: Not Reported
00:20:34.439 Relative Read Throughput: 0
00:20:34.439 Relative Read Latency: 0
00:20:34.439 Relative Write Throughput: 0
00:20:34.439 Relative Write Latency: 0
00:20:34.439 Idle Power: Not Reported
00:20:34.439 Active Power: Not Reported
00:20:34.439 Non-Operational Permissive Mode: Not Supported
00:20:34.439
00:20:34.439 Health Information
00:20:34.439 ==================
00:20:34.439 Critical Warnings:
00:20:34.439 Available Spare Space: OK
00:20:34.439 Temperature: OK
00:20:34.439 Device Reliability: OK
00:20:34.439 Read Only: No
00:20:34.439 Volatile Memory Backup: OK
00:20:34.439 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:34.439 Temperature Threshold: [2024-10-04 06:38:26.911732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:34.439 [2024-10-04 06:38:26.911739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:34.439 [2024-10-04 06:38:26.911742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2441540)
00:20:34.439 [2024-10-04 06:38:26.911749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:34.439 [2024-10-04 06:38:26.911771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247abc0, cid 7, qid 0
00:20:34.439 [2024-10-04 06:38:26.915850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:34.439 [2024-10-04 06:38:26.915870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:34.439 [2024-10-04 06:38:26.915874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:34.439 [2024-10-04 06:38:26.915878] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247abc0) on tqpair=0x2441540 [2024-10-04 06:38:26.915913]
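The capabilities report above is SPDK's rendering of the Identify Controller data it fetched during initialization. An application linked against SPDK can read the same structure through the public API once the controller is attached; a minimal sketch (assumes an already-connected ctrlr handle, error handling elided):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* Cached Identify Controller data; fields mirror the spec names. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        printf("vid=0x%04x nn=%u\n", cdata->vid, cdata->nn); /* 8086, 32 above */

        /* KAS is reported in 100 ms units: 100 -> the 10000 ms granularity. */
        printf("keep-alive granularity=%u ms\n", cdata->kas * 100);

        /* MDTS is a power-of-two multiple of the minimum page size; with
         * 4096-byte pages an MDTS of 5 yields the 131072 bytes reported. */
        printf("mdts exponent=%u\n", cdata->mdts);
    }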
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:34.439 [2024-10-04 06:38:26.915926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.439 [2024-10-04 06:38:26.915933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.439 [2024-10-04 06:38:26.915939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.439 [2024-10-04 06:38:26.915944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.439 [2024-10-04 06:38:26.915953] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.439 [2024-10-04 06:38:26.915957] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.439 [2024-10-04 06:38:26.915960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.439 [2024-10-04 06:38:26.915968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.915991] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916101] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916256] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916261] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:34.440 [2024-10-04 06:38:26.916266] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:34.440 [2024-10-04 06:38:26.916275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916279] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
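Teardown begins here: destruct aborts the four still-outstanding async event requests (the ABORTED - SQ DELETION completions), then shuts the controller down through the Controller Configuration register. Because this is a fabrics controller, CC and CSTS are reached with FABRIC PROPERTY GET/SET capsules rather than MMIO, and since the controller reports RTD3E = 0 the driver falls back to its default 10000 ms shutdown timeout. The register protocol itself is plain NVMe; a spec-level sketch (offsets and bit positions from the spec, helper names illustrative):

    #include <stdint.h>

    #define NVME_REG_CC   0x14 /* Controller Configuration (PROPERTY SET above) */
    #define NVME_REG_CSTS 0x1c /* Controller Status (polled below) */

    /* CC.SHN (bits 15:14): 01b requests a normal shutdown. */
    static uint32_t cc_with_normal_shutdown(uint32_t cc)
    {
        return (cc & ~(3u << 14)) | (1u << 14);
    }

    /* CSTS.SHST (bits 3:2): 10b means shutdown processing complete. */
    static int csts_shutdown_complete(uint32_t csts)
    {
        return ((csts >> 2) & 3u) == 2u;
    }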
00:20:34.440 [2024-10-04 06:38:26.916282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916487] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916494] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916504] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916660] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916678] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916763] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.916894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.916906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.916910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.916924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.916932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.916939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.916957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.917020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.917026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.917029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.917043] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917047] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917050] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.917056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.917072] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.917160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.917170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.917174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.917188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.917202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.917218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.917297] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.917304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.917307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.917321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.917335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.917351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.917416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.440 [2024-10-04 06:38:26.917427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.917431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917435] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.917445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.440 [2024-10-04 06:38:26.917460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.440 [2024-10-04 06:38:26.917476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.440 [2024-10-04 06:38:26.917554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
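Every entry in this stretch names a PDU type: the driver's property-get capsules go out as type 4 (CapsuleCmd) and each completion arrives as type 5 (CapsuleResp), while the earlier Identify and Get Log Page payloads arrived as type 7 (C2HData) with datao/datal offsets. The codes come from the NVMe/TCP transport spec:

    /* NVMe/TCP PDU types, as printed by nvme_tcp_pdu_ch_handle above. */
    enum nvme_tcp_pdu_type {
        NVME_TCP_PDU_IC_REQ       = 0x00,
        NVME_TCP_PDU_IC_RESP      = 0x01,
        NVME_TCP_PDU_H2C_TERM_REQ = 0x02,
        NVME_TCP_PDU_C2H_TERM_REQ = 0x03,
        NVME_TCP_PDU_CAPSULE_CMD  = 0x04, /* "capsule_cmd cid=N on tqpair" */
        NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5" */
        NVME_TCP_PDU_H2C_DATA     = 0x06,
        NVME_TCP_PDU_C2H_DATA     = 0x07, /* "pdu type = 7", datao/datal */
        NVME_TCP_PDU_R2T          = 0x09,
    };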
00:20:34.440 [2024-10-04 06:38:26.917561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.440 [2024-10-04 06:38:26.917564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.440 [2024-10-04 06:38:26.917578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.440 [2024-10-04 06:38:26.917582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.917592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.917617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.917688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.917709] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.917713] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917717] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.917727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.917743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.917759] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.917816] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.917822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.917825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917829] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.917839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917843] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.917869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.917888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.917958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.917965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.917968] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.917982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.917990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.917996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918072] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918105] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918212] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on 
tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918396] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918464] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918574] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918605] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918711] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918715] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918730] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.918860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.918872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.918876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.918890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.918898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.918905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.918924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.919001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.919027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.919031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.919035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.919046] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.919050] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.919054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.441 [2024-10-04 06:38:26.919061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.441 [2024-10-04 06:38:26.919079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.441 [2024-10-04 06:38:26.919150] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.441 [2024-10-04 06:38:26.919156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.441 [2024-10-04 06:38:26.919160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.919164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.441 [2024-10-04 06:38:26.919174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.441 [2024-10-04 06:38:26.919178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 
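The repeating FABRIC PROPERTY GET qid:0 cid:3 blocks are the shutdown poll in action: one property read of CSTS per iteration until SHST reports complete or the 10000 ms budget runs out. A compact sketch of that control flow (read_csts() is a hypothetical stand-in for the property get; SPDK actually drives this from its completion path rather than a blocking loop):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    extern uint32_t read_csts(void); /* hypothetical property-get wrapper */

    static bool wait_for_shutdown(uint32_t timeout_ms)
    {
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000 * 1000 }; /* ~1 ms */

        for (uint32_t waited_ms = 0; waited_ms < timeout_ms; waited_ms++) {
            if (((read_csts() >> 2) & 3u) == 2u) /* CSTS.SHST == complete */
                return true;
            nanosleep(&ts, NULL);
        }
        return false; /* would surface as a shutdown-timeout error */
    }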
00:20:34.442 [2024-10-04 06:38:26.919188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.919204] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.919278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.919289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.919293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919297] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.919307] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919315] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.919322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.919339] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.919396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.919403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.919407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919411] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.919421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.919435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.919451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.919511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.919521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.919525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.919539] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.919569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 
[2024-10-04 06:38:26.919585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.919641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.919647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.919650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919654] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.919664] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.919678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.919693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.919754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.919764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.919768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.919782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.919789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.919796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.919812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.923877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.923887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.923891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.923895] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.923908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.923912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.923916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441540) 00:20:34.442 [2024-10-04 06:38:26.923924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.442 [2024-10-04 06:38:26.923946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247a640, cid 3, qid 0 00:20:34.442 [2024-10-04 06:38:26.924010] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.442 [2024-10-04 06:38:26.924016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.442 [2024-10-04 06:38:26.924019] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.442 [2024-10-04 06:38:26.924023] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x247a640) on tqpair=0x2441540 00:20:34.442 [2024-10-04 06:38:26.924059] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:34.442 0 Kelvin (-273 Celsius) 00:20:34.442 Available Spare: 0% 00:20:34.442 Available Spare Threshold: 0% 00:20:34.442 Life Percentage Used: 0% 00:20:34.442 Data Units Read: 0 00:20:34.442 Data Units Written: 0 00:20:34.442 Host Read Commands: 0 00:20:34.442 Host Write Commands: 0 00:20:34.442 Controller Busy Time: 0 minutes 00:20:34.442 Power Cycles: 0 00:20:34.442 Power On Hours: 0 hours 00:20:34.442 Unsafe Shutdowns: 0 00:20:34.442 Unrecoverable Media Errors: 0 00:20:34.442 Lifetime Error Log Entries: 0 00:20:34.442 Warning Temperature Time: 0 minutes 00:20:34.442 Critical Temperature Time: 0 minutes 00:20:34.442 00:20:34.442 Number of Queues 00:20:34.442 ================ 00:20:34.442 Number of I/O Submission Queues: 127 00:20:34.442 Number of I/O Completion Queues: 127 00:20:34.442 00:20:34.442 Active Namespaces 00:20:34.442 ================= 00:20:34.442 Namespace ID:1 00:20:34.442 Error Recovery Timeout: Unlimited 00:20:34.442 Command Set Identifier: NVM (00h) 00:20:34.442 Deallocate: Supported 00:20:34.442 Deallocated/Unwritten Error: Not Supported 00:20:34.442 Deallocated Read Value: Unknown 00:20:34.442 Deallocate in Write Zeroes: Not Supported 00:20:34.442 Deallocated Guard Field: 0xFFFF 00:20:34.442 Flush: Supported 00:20:34.442 Reservation: Supported 00:20:34.442 Namespace Sharing Capabilities: Multiple Controllers 00:20:34.442 Size (in LBAs): 131072 (0GiB) 00:20:34.442 Capacity (in LBAs): 131072 (0GiB) 00:20:34.442 Utilization (in LBAs): 131072 (0GiB) 00:20:34.442 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:34.442 EUI64: ABCDEF0123456789 00:20:34.442 UUID: dfbd2966-6dfa-414b-a38b-b79780e3ff8e 00:20:34.442 Thin Provisioning: Not Supported 00:20:34.442 Per-NS Atomic Units: Yes 00:20:34.442 Atomic Boundary Size (Normal): 0 00:20:34.442 Atomic Boundary Size (PFail): 0 00:20:34.442 Atomic Boundary Offset: 0 00:20:34.442 Maximum Single Source Range Length: 65535 00:20:34.442 Maximum Copy Length: 65535 00:20:34.442 Maximum Source Range Count: 1 00:20:34.442 NGUID/EUI64 Never Reused: No 00:20:34.442 Namespace Write Protected: No 00:20:34.442 Number of LBA Formats: 1 00:20:34.442 Current LBA Format: LBA Format #00 00:20:34.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:34.442 00:20:34.442 06:38:26 -- host/identify.sh@51 -- # sync 00:20:34.442 06:38:26 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.442 06:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:34.442 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:34.442 06:38:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:34.442 06:38:27 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:34.442 06:38:27 -- host/identify.sh@56 -- # nvmftestfini 00:20:34.442 06:38:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:34.442 06:38:27 -- nvmf/common.sh@116 -- # sync 00:20:34.442 06:38:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:34.442 06:38:27 -- 
nvmf/common.sh@119 -- # set +e 00:20:34.442 06:38:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:34.442 06:38:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:34.442 rmmod nvme_tcp 00:20:34.442 rmmod nvme_fabrics 00:20:34.442 rmmod nvme_keyring 00:20:34.442 06:38:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:34.442 06:38:27 -- nvmf/common.sh@123 -- # set -e 00:20:34.442 06:38:27 -- nvmf/common.sh@124 -- # return 0 00:20:34.442 06:38:27 -- nvmf/common.sh@477 -- # '[' -n 93044 ']' 00:20:34.442 06:38:27 -- nvmf/common.sh@478 -- # killprocess 93044 00:20:34.442 06:38:27 -- common/autotest_common.sh@926 -- # '[' -z 93044 ']' 00:20:34.442 06:38:27 -- common/autotest_common.sh@930 -- # kill -0 93044 00:20:34.442 06:38:27 -- common/autotest_common.sh@931 -- # uname 00:20:34.442 06:38:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.442 06:38:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93044 00:20:34.442 06:38:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:34.443 06:38:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:34.443 killing process with pid 93044 00:20:34.443 06:38:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93044' 00:20:34.443 06:38:27 -- common/autotest_common.sh@945 -- # kill 93044 00:20:34.443 [2024-10-04 06:38:27.108453] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:34.443 06:38:27 -- common/autotest_common.sh@950 -- # wait 93044 00:20:35.010 06:38:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:35.010 06:38:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:35.010 06:38:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:35.010 06:38:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.010 06:38:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:35.010 06:38:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.010 06:38:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.010 06:38:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.010 06:38:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:35.010 00:20:35.010 real 0m2.669s 00:20:35.010 user 0m7.637s 00:20:35.010 sys 0m0.736s 00:20:35.010 06:38:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.010 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.010 ************************************ 00:20:35.010 END TEST nvmf_identify 00:20:35.010 ************************************ 00:20:35.010 06:38:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:35.010 06:38:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:35.010 06:38:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:35.010 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.010 ************************************ 00:20:35.010 START TEST nvmf_perf 00:20:35.010 ************************************ 00:20:35.010 06:38:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:35.010 * Looking for test storage... 
00:20:35.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.010 06:38:27 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.010 06:38:27 -- nvmf/common.sh@7 -- # uname -s 00:20:35.010 06:38:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.010 06:38:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.010 06:38:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.010 06:38:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.010 06:38:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.010 06:38:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.010 06:38:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.010 06:38:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.010 06:38:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.010 06:38:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.011 06:38:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:35.011 06:38:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:20:35.011 06:38:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.011 06:38:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.011 06:38:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.011 06:38:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.011 06:38:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.011 06:38:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.011 06:38:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.011 06:38:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.011 06:38:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.011 06:38:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.011 06:38:27 -- paths/export.sh@5 -- 
# export PATH 00:20:35.011 06:38:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.011 06:38:27 -- nvmf/common.sh@46 -- # : 0 00:20:35.011 06:38:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:35.011 06:38:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:35.011 06:38:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:35.011 06:38:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.011 06:38:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.011 06:38:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:35.011 06:38:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:35.011 06:38:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:35.011 06:38:27 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:35.011 06:38:27 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:35.011 06:38:27 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.011 06:38:27 -- host/perf.sh@17 -- # nvmftestinit 00:20:35.011 06:38:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:35.011 06:38:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.011 06:38:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:35.011 06:38:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:35.011 06:38:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:35.011 06:38:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.011 06:38:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.011 06:38:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.011 06:38:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:35.011 06:38:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:35.011 06:38:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:35.011 06:38:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:35.011 06:38:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:35.011 06:38:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:35.011 06:38:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.011 06:38:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.011 06:38:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:35.011 06:38:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:35.011 06:38:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.011 06:38:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.011 06:38:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.011 06:38:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.011 06:38:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.011 06:38:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.011 06:38:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.011 06:38:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.011 06:38:27 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:35.011 06:38:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:35.011 Cannot find device "nvmf_tgt_br" 00:20:35.011 06:38:27 -- nvmf/common.sh@154 -- # true 00:20:35.011 06:38:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.011 Cannot find device "nvmf_tgt_br2" 00:20:35.011 06:38:27 -- nvmf/common.sh@155 -- # true 00:20:35.011 06:38:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:35.011 06:38:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:35.011 Cannot find device "nvmf_tgt_br" 00:20:35.011 06:38:27 -- nvmf/common.sh@157 -- # true 00:20:35.011 06:38:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:35.011 Cannot find device "nvmf_tgt_br2" 00:20:35.011 06:38:27 -- nvmf/common.sh@158 -- # true 00:20:35.011 06:38:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:35.270 06:38:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:35.270 06:38:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.270 06:38:27 -- nvmf/common.sh@161 -- # true 00:20:35.270 06:38:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.270 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.270 06:38:27 -- nvmf/common.sh@162 -- # true 00:20:35.270 06:38:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.270 06:38:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.270 06:38:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.270 06:38:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.270 06:38:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.270 06:38:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.270 06:38:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.270 06:38:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:35.270 06:38:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:35.270 06:38:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:35.270 06:38:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:35.270 06:38:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:35.270 06:38:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:35.270 06:38:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.270 06:38:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.270 06:38:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.270 06:38:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:35.270 06:38:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:35.270 06:38:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.270 06:38:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.270 06:38:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.270 06:38:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.270 06:38:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.270 06:38:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:35.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:35.270 00:20:35.270 --- 10.0.0.2 ping statistics --- 00:20:35.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.270 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:35.270 06:38:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:35.270 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.270 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:35.270 00:20:35.270 --- 10.0.0.3 ping statistics --- 00:20:35.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.270 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:35.270 06:38:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:35.270 00:20:35.270 --- 10.0.0.1 ping statistics --- 00:20:35.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.270 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:35.270 06:38:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.270 06:38:27 -- nvmf/common.sh@421 -- # return 0 00:20:35.270 06:38:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:35.270 06:38:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.270 06:38:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:35.270 06:38:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:35.270 06:38:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.270 06:38:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:35.270 06:38:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:35.270 06:38:27 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:35.270 06:38:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.270 06:38:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.270 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.529 06:38:27 -- nvmf/common.sh@469 -- # nvmfpid=93263 00:20:35.529 06:38:27 -- nvmf/common.sh@470 -- # waitforlisten 93263 00:20:35.529 06:38:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:35.529 06:38:27 -- common/autotest_common.sh@819 -- # '[' -z 93263 ']' 00:20:35.529 06:38:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.529 06:38:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.529 06:38:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.529 06:38:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.529 06:38:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.529 [2024-10-04 06:38:28.008512] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
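Taken together, the nvmf_veth_init commands traced above build a small virtual network: the target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs joined by a bridge. A condensed sketch of that topology, with names and addresses exactly as logged (link-up and bridge-enslave steps elided):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# one veth pair per endpoint; the *_br peers get enslaved to bridge nvmf_br
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator at 10.0.0.1, target listeners at 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# let NVMe/TCP traffic to port 4420 in
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above verify each leg of that network before nvmf_tgt is launched inside the namespace.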
00:20:35.529 [2024-10-04 06:38:28.008597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.529 [2024-10-04 06:38:28.151736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.786 [2024-10-04 06:38:28.227017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.787 [2024-10-04 06:38:28.227185] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.787 [2024-10-04 06:38:28.227202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.787 [2024-10-04 06:38:28.227213] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.787 [2024-10-04 06:38:28.227354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.787 [2024-10-04 06:38:28.227500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.787 [2024-10-04 06:38:28.227618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.787 [2024-10-04 06:38:28.227624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.717 06:38:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.717 06:38:29 -- common/autotest_common.sh@852 -- # return 0 00:20:36.717 06:38:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.717 06:38:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:36.717 06:38:29 -- common/autotest_common.sh@10 -- # set +x 00:20:36.717 06:38:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.717 06:38:29 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:36.717 06:38:29 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:36.974 06:38:29 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:36.974 06:38:29 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:37.231 06:38:29 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:37.231 06:38:29 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:37.489 06:38:30 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:37.489 06:38:30 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:37.489 06:38:30 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:37.489 06:38:30 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:37.489 06:38:30 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:37.746 [2024-10-04 06:38:30.270939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.746 06:38:30 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.003 06:38:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:38.003 06:38:30 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:38.261 06:38:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:38.261 06:38:30 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:38.519 
06:38:31 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.777 [2024-10-04 06:38:31.272789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.777 06:38:31 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:39.034 06:38:31 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:39.034 06:38:31 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:39.034 06:38:31 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:39.034 06:38:31 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:39.968 Initializing NVMe Controllers 00:20:39.968 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:39.968 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:39.968 Initialization complete. Launching workers. 00:20:39.968 ======================================================== 00:20:39.968 Latency(us) 00:20:39.968 Device Information : IOPS MiB/s Average min max 00:20:39.968 PCIE (0000:00:06.0) NSID 1 from core 0: 23511.97 91.84 1360.95 299.11 7992.33 00:20:39.968 ======================================================== 00:20:39.968 Total : 23511.97 91.84 1360.95 299.11 7992.33 00:20:39.968 00:20:39.968 06:38:32 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.341 Initializing NVMe Controllers 00:20:41.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:41.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:41.341 Initialization complete. Launching workers. 
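Before these fabrics runs, perf.sh wires the target up over JSON-RPC; stripped of the xtrace noise, the bring-up traced above reduces to roughly this sequence (rpc.py paths shortened, values as logged):

# TCP transport, then one subsystem with two namespaces and a listener
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_malloc_create 64 512                   # 64 MB RAM bdev (Malloc0), 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1: RAM-backed
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2: the local NVMe at 0000:00:06.0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That pairing is why NSID 1 and NSID 2 report such different latencies in the tables that follow.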
00:20:41.341 ======================================================== 00:20:41.341 Latency(us) 00:20:41.341 Device Information : IOPS MiB/s Average min max 00:20:41.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3540.44 13.83 281.07 103.82 4290.19 00:20:41.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.63 0.48 8154.14 7036.34 12028.95 00:20:41.341 ======================================================== 00:20:41.341 Total : 3663.07 14.31 544.65 103.82 12028.95 00:20:41.341 00:20:41.341 06:38:33 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.717 [2024-10-04 06:38:35.207907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.207968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.207978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.207987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.207995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.208002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.208009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.208016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 [2024-10-04 06:38:35.208024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb19d0 is same with the state(5) to be set 00:20:42.717 Initializing NVMe Controllers 00:20:42.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:42.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:42.717 Initialization complete. Launching workers. 
00:20:42.717 ======================================================== 00:20:42.717 Latency(us) 00:20:42.717 Device Information : IOPS MiB/s Average min max 00:20:42.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8806.99 34.40 3637.28 539.20 8403.33 00:20:42.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2643.00 10.32 12194.22 6463.96 23119.94 00:20:42.717 ======================================================== 00:20:42.717 Total : 11449.99 44.73 5612.47 539.20 23119.94 00:20:42.717 00:20:42.717 06:38:35 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:42.717 06:38:35 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.386 Initializing NVMe Controllers 00:20:45.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.386 Controller IO queue size 128, less than required. 00:20:45.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:45.386 Controller IO queue size 128, less than required. 00:20:45.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:45.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:45.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:45.386 Initialization complete. Launching workers. 00:20:45.386 ======================================================== 00:20:45.386 Latency(us) 00:20:45.386 Device Information : IOPS MiB/s Average min max 00:20:45.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1561.05 390.26 83331.59 46702.28 155114.57 00:20:45.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.44 149.86 215321.30 99120.99 336838.07 00:20:45.386 ======================================================== 00:20:45.386 Total : 2160.49 540.12 119953.01 46702.28 336838.07 00:20:45.386 00:20:45.386 06:38:37 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:45.386 No valid NVMe controllers or AIO or URING devices found 00:20:45.386 Initializing NVMe Controllers 00:20:45.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:45.386 Controller IO queue size 128, less than required. 00:20:45.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:45.386 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:45.386 Controller IO queue size 128, less than required. 00:20:45.386 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:45.386 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:45.386 WARNING: Some requested NVMe devices were skipped 00:20:45.386 06:38:37 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:47.959 Initializing NVMe Controllers 00:20:47.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.959 Controller IO queue size 128, less than required. 00:20:47.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.959 Controller IO queue size 128, less than required. 00:20:47.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.959 Initialization complete. Launching workers. 00:20:47.959 00:20:47.959 ==================== 00:20:47.959 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:47.959 TCP transport: 00:20:47.959 polls: 8149 00:20:47.959 idle_polls: 4979 00:20:47.959 sock_completions: 3170 00:20:47.959 nvme_completions: 4716 00:20:47.959 submitted_requests: 7262 00:20:47.959 queued_requests: 1 00:20:47.959 00:20:47.959 ==================== 00:20:47.959 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:47.959 TCP transport: 00:20:47.959 polls: 8471 00:20:47.959 idle_polls: 5394 00:20:47.959 sock_completions: 3077 00:20:47.959 nvme_completions: 6153 00:20:47.959 submitted_requests: 9365 00:20:47.959 queued_requests: 1 00:20:47.959 ======================================================== 00:20:47.959 Latency(us) 00:20:47.959 Device Information : IOPS MiB/s Average min max 00:20:47.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1241.08 310.27 106856.52 68147.27 177839.86 00:20:47.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1599.67 399.92 80504.77 48788.05 135314.35 00:20:47.959 ======================================================== 00:20:47.959 Total : 2840.76 710.19 92017.44 48788.05 177839.86 00:20:47.959 00:20:47.959 06:38:40 -- host/perf.sh@66 -- # sync 00:20:47.959 06:38:40 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.218 06:38:40 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:48.218 06:38:40 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:48.218 06:38:40 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:48.476 06:38:41 -- host/perf.sh@72 -- # ls_guid=df9d6c6f-42fc-4faf-bef3-86a52e9075ab 00:20:48.476 06:38:41 -- host/perf.sh@73 -- # get_lvs_free_mb df9d6c6f-42fc-4faf-bef3-86a52e9075ab 00:20:48.476 06:38:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=df9d6c6f-42fc-4faf-bef3-86a52e9075ab 00:20:48.476 06:38:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:48.476 06:38:41 -- common/autotest_common.sh@1345 -- # local fc 00:20:48.476 06:38:41 -- common/autotest_common.sh@1346 -- # local cs 00:20:48.476 06:38:41 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:48.735 06:38:41 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:48.735 { 
00:20:48.735 "base_bdev": "Nvme0n1", 00:20:48.735 "block_size": 4096, 00:20:48.735 "cluster_size": 4194304, 00:20:48.735 "free_clusters": 1278, 00:20:48.735 "name": "lvs_0", 00:20:48.735 "total_data_clusters": 1278, 00:20:48.735 "uuid": "df9d6c6f-42fc-4faf-bef3-86a52e9075ab" 00:20:48.735 } 00:20:48.735 ]' 00:20:48.735 06:38:41 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="df9d6c6f-42fc-4faf-bef3-86a52e9075ab") .free_clusters' 00:20:48.993 06:38:41 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:48.993 06:38:41 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="df9d6c6f-42fc-4faf-bef3-86a52e9075ab") .cluster_size' 00:20:48.993 5112 00:20:48.993 06:38:41 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:48.993 06:38:41 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:48.993 06:38:41 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:48.993 06:38:41 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:48.993 06:38:41 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u df9d6c6f-42fc-4faf-bef3-86a52e9075ab lbd_0 5112 00:20:49.252 06:38:41 -- host/perf.sh@80 -- # lb_guid=f73bb533-1b7a-44dc-80d0-2be7fbfd8b31 00:20:49.252 06:38:41 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f73bb533-1b7a-44dc-80d0-2be7fbfd8b31 lvs_n_0 00:20:49.819 06:38:42 -- host/perf.sh@83 -- # ls_nested_guid=2d14adae-49c1-483e-9a7f-d8c5efe7b550 00:20:49.819 06:38:42 -- host/perf.sh@84 -- # get_lvs_free_mb 2d14adae-49c1-483e-9a7f-d8c5efe7b550 00:20:49.819 06:38:42 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2d14adae-49c1-483e-9a7f-d8c5efe7b550 00:20:49.819 06:38:42 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:49.819 06:38:42 -- common/autotest_common.sh@1345 -- # local fc 00:20:49.819 06:38:42 -- common/autotest_common.sh@1346 -- # local cs 00:20:49.819 06:38:42 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:49.819 06:38:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:49.819 { 00:20:49.819 "base_bdev": "Nvme0n1", 00:20:49.819 "block_size": 4096, 00:20:49.819 "cluster_size": 4194304, 00:20:49.819 "free_clusters": 0, 00:20:49.819 "name": "lvs_0", 00:20:49.819 "total_data_clusters": 1278, 00:20:49.819 "uuid": "df9d6c6f-42fc-4faf-bef3-86a52e9075ab" 00:20:49.819 }, 00:20:49.819 { 00:20:49.819 "base_bdev": "f73bb533-1b7a-44dc-80d0-2be7fbfd8b31", 00:20:49.819 "block_size": 4096, 00:20:49.819 "cluster_size": 4194304, 00:20:49.819 "free_clusters": 1276, 00:20:49.819 "name": "lvs_n_0", 00:20:49.819 "total_data_clusters": 1276, 00:20:49.819 "uuid": "2d14adae-49c1-483e-9a7f-d8c5efe7b550" 00:20:49.819 } 00:20:49.819 ]' 00:20:49.819 06:38:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2d14adae-49c1-483e-9a7f-d8c5efe7b550") .free_clusters' 00:20:50.078 06:38:42 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:50.078 06:38:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2d14adae-49c1-483e-9a7f-d8c5efe7b550") .cluster_size' 00:20:50.078 06:38:42 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:50.078 06:38:42 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:50.078 5104 00:20:50.078 06:38:42 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:50.078 06:38:42 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:50.078 06:38:42 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2d14adae-49c1-483e-9a7f-d8c5efe7b550 
lbd_nest_0 5104 00:20:50.355 06:38:42 -- host/perf.sh@88 -- # lb_nested_guid=665af428-5320-43d0-916e-beb2dd6e6582 00:20:50.355 06:38:42 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.614 06:38:43 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:50.614 06:38:43 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 665af428-5320-43d0-916e-beb2dd6e6582 00:20:50.873 06:38:43 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.131 06:38:43 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:51.131 06:38:43 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:51.131 06:38:43 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:51.131 06:38:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:51.131 06:38:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:51.389 No valid NVMe controllers or AIO or URING devices found 00:20:51.389 Initializing NVMe Controllers 00:20:51.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.389 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:51.389 WARNING: Some requested NVMe devices were skipped 00:20:51.389 06:38:43 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:51.389 06:38:43 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.601 Initializing NVMe Controllers 00:21:03.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:03.601 Initialization complete. Launching workers. 
00:21:03.601 ======================================================== 00:21:03.601 Latency(us) 00:21:03.601 Device Information : IOPS MiB/s Average min max 00:21:03.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 807.95 100.99 1237.33 413.78 7742.45 00:21:03.601 ======================================================== 00:21:03.601 Total : 807.95 100.99 1237.33 413.78 7742.45 00:21:03.601 00:21:03.601 06:38:54 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:03.601 06:38:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:03.601 06:38:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.601 No valid NVMe controllers or AIO or URING devices found 00:21:03.601 Initializing NVMe Controllers 00:21:03.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.601 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:03.601 WARNING: Some requested NVMe devices were skipped 00:21:03.601 06:38:54 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:03.601 06:38:54 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.601 [2024-10-04 06:39:04.724869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd539a0 is same with the state(5) to be set 00:21:13.601 [2024-10-04 06:39:04.724963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd539a0 is same with the state(5) to be set 00:21:13.601 [2024-10-04 06:39:04.724986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd539a0 is same with the state(5) to be set 00:21:13.601 Initializing NVMe Controllers 00:21:13.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:13.601 Initialization complete. Launching workers. 
00:21:13.601 ======================================================== 00:21:13.601 Latency(us) 00:21:13.601 Device Information : IOPS MiB/s Average min max 00:21:13.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1111.70 138.96 28812.70 8019.77 281796.72 00:21:13.601 ======================================================== 00:21:13.601 Total : 1111.70 138.96 28812.70 8019.77 281796.72 00:21:13.601 00:21:13.601 06:39:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:13.601 06:39:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:13.601 06:39:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.601 No valid NVMe controllers or AIO or URING devices found 00:21:13.601 Initializing NVMe Controllers 00:21:13.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.601 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:13.601 WARNING: Some requested NVMe devices were skipped 00:21:13.601 06:39:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:13.601 06:39:05 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.580 Initializing NVMe Controllers 00:21:23.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.580 Controller IO queue size 128, less than required. 00:21:23.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.580 Initialization complete. Launching workers. 
00:21:23.580 ======================================================== 00:21:23.580 Latency(us) 00:21:23.580 Device Information : IOPS MiB/s Average min max 00:21:23.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3471.25 433.91 36879.47 14094.31 76816.86 00:21:23.580 ======================================================== 00:21:23.580 Total : 3471.25 433.91 36879.47 14094.31 76816.86 00:21:23.580 00:21:23.580 06:39:15 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.580 06:39:15 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 665af428-5320-43d0-916e-beb2dd6e6582 00:21:23.580 06:39:16 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:23.838 06:39:16 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f73bb533-1b7a-44dc-80d0-2be7fbfd8b31 00:21:24.099 06:39:16 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:24.359 06:39:16 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:24.359 06:39:16 -- host/perf.sh@114 -- # nvmftestfini 00:21:24.359 06:39:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:24.359 06:39:16 -- nvmf/common.sh@116 -- # sync 00:21:24.359 06:39:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:24.359 06:39:17 -- nvmf/common.sh@119 -- # set +e 00:21:24.359 06:39:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:24.359 06:39:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:24.359 rmmod nvme_tcp 00:21:24.359 rmmod nvme_fabrics 00:21:24.618 rmmod nvme_keyring 00:21:24.618 06:39:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:24.618 06:39:17 -- nvmf/common.sh@123 -- # set -e 00:21:24.618 06:39:17 -- nvmf/common.sh@124 -- # return 0 00:21:24.618 06:39:17 -- nvmf/common.sh@477 -- # '[' -n 93263 ']' 00:21:24.618 06:39:17 -- nvmf/common.sh@478 -- # killprocess 93263 00:21:24.618 06:39:17 -- common/autotest_common.sh@926 -- # '[' -z 93263 ']' 00:21:24.619 06:39:17 -- common/autotest_common.sh@930 -- # kill -0 93263 00:21:24.619 06:39:17 -- common/autotest_common.sh@931 -- # uname 00:21:24.619 06:39:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.619 06:39:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93263 00:21:24.619 killing process with pid 93263 00:21:24.619 06:39:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:24.619 06:39:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:24.619 06:39:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93263' 00:21:24.619 06:39:17 -- common/autotest_common.sh@945 -- # kill 93263 00:21:24.619 06:39:17 -- common/autotest_common.sh@950 -- # wait 93263 00:21:25.996 06:39:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:25.996 06:39:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.996 06:39:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.996 06:39:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.996 06:39:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:25.996 06:39:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.996 06:39:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.996 06:39:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.996 06:39:18 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:25.996 00:21:25.996 real 0m51.149s 00:21:25.996 user 3m13.700s 00:21:25.996 sys 0m10.806s 00:21:25.996 06:39:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:25.996 06:39:18 -- common/autotest_common.sh@10 -- # set +x 00:21:25.996 ************************************ 00:21:25.996 END TEST nvmf_perf 00:21:25.996 ************************************ 00:21:26.255 06:39:18 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:26.255 06:39:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:26.255 06:39:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:26.255 06:39:18 -- common/autotest_common.sh@10 -- # set +x 00:21:26.255 ************************************ 00:21:26.255 START TEST nvmf_fio_host 00:21:26.255 ************************************ 00:21:26.255 06:39:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:26.255 * Looking for test storage... 00:21:26.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:26.255 06:39:18 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.255 06:39:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.255 06:39:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.255 06:39:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.255 06:39:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@5 -- # export PATH 00:21:26.255 06:39:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.255 06:39:18 -- nvmf/common.sh@7 -- # uname -s 00:21:26.255 06:39:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.255 06:39:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.255 06:39:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.255 06:39:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.255 06:39:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.255 06:39:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.255 06:39:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.255 06:39:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.255 06:39:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.255 06:39:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.255 06:39:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:21:26.255 06:39:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:21:26.255 06:39:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.255 06:39:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.255 06:39:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.255 06:39:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.255 06:39:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.255 06:39:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.255 06:39:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.255 06:39:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.255 06:39:18 -- paths/export.sh@5 -- # export PATH 00:21:26.256 06:39:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.256 06:39:18 -- nvmf/common.sh@46 -- # : 0 00:21:26.256 06:39:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:26.256 06:39:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:26.256 06:39:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:26.256 06:39:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.256 06:39:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.256 06:39:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:26.256 06:39:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:26.256 06:39:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:26.256 06:39:18 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.256 06:39:18 -- host/fio.sh@14 -- # nvmftestinit 00:21:26.256 06:39:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:26.256 06:39:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.256 06:39:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:26.256 06:39:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:26.256 06:39:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:26.256 06:39:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.256 06:39:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.256 06:39:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.256 06:39:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:26.256 06:39:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:26.256 06:39:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:26.256 06:39:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:26.256 06:39:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:26.256 06:39:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:26.256 06:39:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.256 06:39:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.256 06:39:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:26.256 06:39:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:26.256 06:39:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:26.256 06:39:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:26.256 06:39:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:26.256 06:39:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.256 06:39:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:26.256 06:39:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:26.256 06:39:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:26.256 06:39:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:26.256 06:39:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:26.256 06:39:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:26.256 Cannot find device "nvmf_tgt_br" 00:21:26.256 06:39:18 -- nvmf/common.sh@154 -- # true 00:21:26.256 06:39:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:26.256 Cannot find device "nvmf_tgt_br2" 00:21:26.256 06:39:18 -- nvmf/common.sh@155 -- # true 00:21:26.256 06:39:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:26.256 06:39:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:26.256 Cannot find device "nvmf_tgt_br" 00:21:26.256 06:39:18 -- nvmf/common.sh@157 -- # true 00:21:26.256 06:39:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:26.256 Cannot find device "nvmf_tgt_br2" 00:21:26.256 06:39:18 -- nvmf/common.sh@158 -- # true 00:21:26.256 06:39:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:26.256 06:39:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:26.517 06:39:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.517 06:39:18 -- nvmf/common.sh@161 -- # true 00:21:26.517 06:39:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.517 06:39:18 -- nvmf/common.sh@162 -- # true 00:21:26.517 06:39:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.517 06:39:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.517 06:39:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.517 06:39:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.517 06:39:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.517 06:39:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.517 06:39:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.517 06:39:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:26.517 06:39:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:26.517 06:39:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:26.517 06:39:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:26.517 06:39:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:26.517 06:39:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:26.517 06:39:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.517 06:39:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
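The block above is nvmf_veth_init building the virtual fabric: a network namespace (nvmf_tgt_ns_spdk) owns the target ends of the veth pairs, the initiator side keeps 10.0.0.1, and the target answers on 10.0.0.2 and 10.0.0.3. A minimal standalone sketch of the same topology, using only commands that appear in this log (run as root; the second target interface and the nvmf_br bridge wiring follow the same pattern):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up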
00:21:26.517 06:39:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.517 06:39:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:26.517 06:39:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:26.517 06:39:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:26.517 06:39:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.517 06:39:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.517 06:39:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.517 06:39:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.517 06:39:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:26.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:26.517 00:21:26.517 --- 10.0.0.2 ping statistics --- 00:21:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.517 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:26.517 06:39:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:26.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:26.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:26.517 00:21:26.517 --- 10.0.0.3 ping statistics --- 00:21:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.517 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:26.517 06:39:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:26.517 00:21:26.517 --- 10.0.0.1 ping statistics --- 00:21:26.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.517 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:26.517 06:39:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.517 06:39:19 -- nvmf/common.sh@421 -- # return 0 00:21:26.517 06:39:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:26.517 06:39:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.517 06:39:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:26.517 06:39:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:26.517 06:39:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.517 06:39:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:26.517 06:39:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:26.517 06:39:19 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:26.517 06:39:19 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:26.517 06:39:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:26.517 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:21:26.517 06:39:19 -- host/fio.sh@24 -- # nvmfpid=94236 00:21:26.517 06:39:19 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:26.517 06:39:19 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.517 06:39:19 -- host/fio.sh@28 -- # waitforlisten 94236 00:21:26.517 06:39:19 -- common/autotest_common.sh@819 -- # '[' -z 94236 ']' 00:21:26.517 06:39:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.517 06:39:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:26.517 06:39:19 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.517 06:39:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:26.517 06:39:19 -- common/autotest_common.sh@10 -- # set +x 00:21:26.775 [2024-10-04 06:39:19.229059] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:21:26.775 [2024-10-04 06:39:19.229179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.775 [2024-10-04 06:39:19.367031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.775 [2024-10-04 06:39:19.444176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:26.775 [2024-10-04 06:39:19.444331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.775 [2024-10-04 06:39:19.444343] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.775 [2024-10-04 06:39:19.444352] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.775 [2024-10-04 06:39:19.444516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.775 [2024-10-04 06:39:19.445108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.775 [2024-10-04 06:39:19.445257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.775 [2024-10-04 06:39:19.445266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.730 06:39:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:27.730 06:39:20 -- common/autotest_common.sh@852 -- # return 0 00:21:27.730 06:39:20 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:27.989 [2024-10-04 06:39:20.463793] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.989 06:39:20 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:27.989 06:39:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:27.989 06:39:20 -- common/autotest_common.sh@10 -- # set +x 00:21:27.989 06:39:20 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:28.247 Malloc1 00:21:28.247 06:39:20 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.506 06:39:21 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:28.763 06:39:21 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.020 [2024-10-04 06:39:21.475586] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.020 06:39:21 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:29.277 06:39:21 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:29.277 06:39:21 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.277 06:39:21 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.277 06:39:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:29.277 06:39:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.277 06:39:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:29.277 06:39:21 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.277 06:39:21 -- common/autotest_common.sh@1320 -- # shift 00:21:29.277 06:39:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:29.277 06:39:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:29.277 06:39:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:29.277 06:39:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:29.277 06:39:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:29.277 06:39:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:29.277 06:39:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:29.277 06:39:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.277 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:29.277 fio-3.35 00:21:29.277 Starting 1 thread 00:21:31.806 00:21:31.806 test: (groupid=0, jobs=1): err= 0: pid=94364: Fri Oct 4 06:39:24 2024 00:21:31.806 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.5MiB/2005msec) 00:21:31.806 slat (nsec): min=1777, max=445152, avg=2319.65, stdev=3954.28 00:21:31.806 clat (usec): min=3404, max=11192, avg=6432.70, stdev=522.06 00:21:31.806 lat (usec): min=3464, max=11194, avg=6435.02, stdev=521.95 00:21:31.806 clat percentiles (usec): 00:21:31.806 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 5997], 00:21:31.806 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:21:31.806 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7308], 00:21:31.806 | 99.00th=[ 7767], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[10683], 00:21:31.806 | 99.99th=[11076] 00:21:31.806 bw ( KiB/s): min=40688, max=42760, per=99.91%, avg=42084.00, stdev=958.28, samples=4 00:21:31.806 iops : min=10172, max=10690, avg=10521.00, stdev=239.57, samples=4 00:21:31.806 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.4MiB/2005msec); 0 zone resets 00:21:31.806 slat 
(nsec): min=1903, max=366003, avg=2418.59, stdev=2874.63 00:21:31.806 clat (usec): min=2647, max=10153, avg=5670.01, stdev=430.39 00:21:31.806 lat (usec): min=2660, max=10155, avg=5672.43, stdev=430.37 00:21:31.806 clat percentiles (usec): 00:21:31.806 | 1.00th=[ 4752], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:21:31.806 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:21:31.806 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6128], 95.00th=[ 6259], 00:21:31.806 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[ 9765], 00:21:31.806 | 99.99th=[10159] 00:21:31.806 bw ( KiB/s): min=41176, max=42560, per=99.99%, avg=42098.00, stdev=631.96, samples=4 00:21:31.806 iops : min=10294, max=10640, avg=10524.50, stdev=157.99, samples=4 00:21:31.806 lat (msec) : 4=0.07%, 10=99.87%, 20=0.06% 00:21:31.806 cpu : usr=67.76%, sys=23.30%, ctx=7, majf=0, minf=5 00:21:31.806 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:31.806 issued rwts: total=21114,21104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.806 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:31.806 00:21:31.806 Run status group 0 (all jobs): 00:21:31.806 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.5MiB (86.5MB), run=2005-2005msec 00:21:31.806 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.4MiB (86.4MB), run=2005-2005msec 00:21:31.806 06:39:24 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:31.806 06:39:24 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:31.806 06:39:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:31.806 06:39:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.806 06:39:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:31.806 06:39:24 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.806 06:39:24 -- common/autotest_common.sh@1320 -- # shift 00:21:31.806 06:39:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:31.806 06:39:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.806 06:39:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.806 06:39:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:31.806 06:39:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:31.806 06:39:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:31.807 06:39:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:31.807 06:39:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.807 06:39:24 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.807 06:39:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:31.807 06:39:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:31.807 06:39:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:31.807 
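The fio runs above go through SPDK's external ioengine rather than the kernel block layer: fio_plugin LD_PRELOADs build/fio/spdk_nvme, and the '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' string is parsed by the plugin as an NVMe-oF target address instead of a file path. A condensed sketch of what the harness does, under the tree layout shown in this log (the ldd/grep steps exist only to preload a matching ASAN runtime when SPDK was built with sanitizers):
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')   # empty on non-ASAN builds
  LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096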
06:39:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:31.807 06:39:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:31.807 06:39:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:31.807 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:31.807 fio-3.35 00:21:31.807 Starting 1 thread 00:21:34.336 00:21:34.336 test: (groupid=0, jobs=1): err= 0: pid=94413: Fri Oct 4 06:39:26 2024 00:21:34.336 read: IOPS=8461, BW=132MiB/s (139MB/s)(265MiB/2004msec) 00:21:34.336 slat (usec): min=2, max=131, avg= 3.56, stdev= 2.53 00:21:34.336 clat (usec): min=2062, max=19078, avg=8931.74, stdev=2160.60 00:21:34.336 lat (usec): min=2066, max=19083, avg=8935.30, stdev=2160.78 00:21:34.336 clat percentiles (usec): 00:21:34.336 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6980], 00:21:34.336 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:21:34.336 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11469], 95.00th=[12518], 00:21:34.336 | 99.00th=[14746], 99.50th=[15664], 99.90th=[17433], 99.95th=[18482], 00:21:34.336 | 99.99th=[19006] 00:21:34.336 bw ( KiB/s): min=61664, max=79936, per=52.01%, avg=70416.00, stdev=8348.85, samples=4 00:21:34.336 iops : min= 3854, max= 4996, avg=4401.00, stdev=521.80, samples=4 00:21:34.336 write: IOPS=5099, BW=79.7MiB/s (83.6MB/s)(144MiB/1808msec); 0 zone resets 00:21:34.336 slat (usec): min=29, max=199, avg=35.70, stdev= 9.26 00:21:34.336 clat (usec): min=3881, max=18062, avg=10665.40, stdev=1795.15 00:21:34.336 lat (usec): min=3912, max=18108, avg=10701.09, stdev=1796.73 00:21:34.336 clat percentiles (usec): 00:21:34.336 | 1.00th=[ 6718], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9241], 00:21:34.336 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:21:34.336 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12911], 95.00th=[13960], 00:21:34.336 | 99.00th=[15664], 99.50th=[16057], 99.90th=[17695], 99.95th=[17957], 00:21:34.336 | 99.99th=[17957] 00:21:34.337 bw ( KiB/s): min=63328, max=82144, per=89.73%, avg=73216.00, stdev=8460.03, samples=4 00:21:34.337 iops : min= 3958, max= 5134, avg=4576.00, stdev=528.75, samples=4 00:21:34.337 lat (msec) : 4=0.27%, 10=57.03%, 20=42.71% 00:21:34.337 cpu : usr=67.70%, sys=19.87%, ctx=4, majf=0, minf=1 00:21:34.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:34.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:34.337 issued rwts: total=16957,9220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:34.337 00:21:34.337 Run status group 0 (all jobs): 00:21:34.337 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=265MiB (278MB), run=2004-2004msec 00:21:34.337 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=144MiB (151MB), run=1808-1808msec 00:21:34.337 06:39:26 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.337 06:39:26 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:34.337 06:39:26 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:34.337 06:39:26 -- host/fio.sh@51 -- # 
get_nvme_bdfs 00:21:34.337 06:39:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:34.337 06:39:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:21:34.337 06:39:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:34.337 06:39:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:34.337 06:39:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:34.595 06:39:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:34.595 06:39:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:34.595 06:39:27 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:34.853 Nvme0n1 00:21:34.853 06:39:27 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:35.112 06:39:27 -- host/fio.sh@53 -- # ls_guid=15fd46de-b47d-4d4b-a0ef-6239c15eb8e1 00:21:35.112 06:39:27 -- host/fio.sh@54 -- # get_lvs_free_mb 15fd46de-b47d-4d4b-a0ef-6239c15eb8e1 00:21:35.112 06:39:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=15fd46de-b47d-4d4b-a0ef-6239c15eb8e1 00:21:35.112 06:39:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:35.112 06:39:27 -- common/autotest_common.sh@1345 -- # local fc 00:21:35.112 06:39:27 -- common/autotest_common.sh@1346 -- # local cs 00:21:35.112 06:39:27 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:35.370 06:39:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:35.370 { 00:21:35.370 "base_bdev": "Nvme0n1", 00:21:35.370 "block_size": 4096, 00:21:35.370 "cluster_size": 1073741824, 00:21:35.370 "free_clusters": 4, 00:21:35.370 "name": "lvs_0", 00:21:35.370 "total_data_clusters": 4, 00:21:35.370 "uuid": "15fd46de-b47d-4d4b-a0ef-6239c15eb8e1" 00:21:35.370 } 00:21:35.370 ]' 00:21:35.370 06:39:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="15fd46de-b47d-4d4b-a0ef-6239c15eb8e1") .free_clusters' 00:21:35.370 06:39:27 -- common/autotest_common.sh@1348 -- # fc=4 00:21:35.370 06:39:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="15fd46de-b47d-4d4b-a0ef-6239c15eb8e1") .cluster_size' 00:21:35.370 06:39:27 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:21:35.370 06:39:27 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:21:35.370 4096 00:21:35.370 06:39:27 -- common/autotest_common.sh@1353 -- # echo 4096 00:21:35.370 06:39:27 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:35.629 7cefbf39-1314-4980-94e6-2e9d931279fa 00:21:35.629 06:39:28 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:35.888 06:39:28 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:36.146 06:39:28 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:36.405 06:39:28 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.405 06:39:28 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.405 06:39:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:36.405 06:39:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.405 06:39:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:36.405 06:39:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:36.405 06:39:28 -- common/autotest_common.sh@1320 -- # shift 00:21:36.405 06:39:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:36.405 06:39:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.405 06:39:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:36.405 06:39:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:36.405 06:39:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:36.405 06:39:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:36.405 06:39:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:36.405 06:39:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.405 06:39:29 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:36.405 06:39:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:36.405 06:39:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:36.405 06:39:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:36.405 06:39:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:36.405 06:39:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:36.405 06:39:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:36.664 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:36.664 fio-3.35 00:21:36.664 Starting 1 thread 00:21:39.196 00:21:39.196 test: (groupid=0, jobs=1): err= 0: pid=94565: Fri Oct 4 06:39:31 2024 00:21:39.196 read: IOPS=5961, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2008msec) 00:21:39.196 slat (nsec): min=1808, max=407614, avg=2905.13, stdev=5060.48 00:21:39.196 clat (usec): min=4596, max=20778, avg=11342.59, stdev=1084.58 00:21:39.196 lat (usec): min=4606, max=20796, avg=11345.49, stdev=1084.31 00:21:39.196 clat percentiles (usec): 00:21:39.196 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:21:39.196 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:21:39.196 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042], 00:21:39.196 | 99.00th=[13960], 99.50th=[14615], 99.90th=[19530], 99.95th=[20579], 00:21:39.196 | 99.99th=[20841] 00:21:39.196 bw ( KiB/s): min=23184, max=24144, per=99.74%, avg=23782.00, stdev=430.39, samples=4 00:21:39.196 iops : min= 5796, max= 6036, avg=5945.50, stdev=107.60, samples=4 00:21:39.196 write: IOPS=5949, BW=23.2MiB/s (24.4MB/s)(46.7MiB/2008msec); 0 zone resets 00:21:39.196 slat (nsec): min=1890, max=367927, avg=3032.26, stdev=4330.77 00:21:39.196 clat (usec): min=2545, max=16901, avg=10077.39, stdev=903.88 00:21:39.196 lat (usec): min=2557, max=16903, avg=10080.42, stdev=903.69 00:21:39.196 
clat percentiles (usec): 00:21:39.196 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:39.196 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:21:39.196 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:21:39.196 | 99.00th=[12125], 99.50th=[12256], 99.90th=[15270], 99.95th=[15664], 00:21:39.196 | 99.99th=[16909] 00:21:39.196 bw ( KiB/s): min=23488, max=24152, per=100.00%, avg=23798.00, stdev=273.52, samples=4 00:21:39.196 iops : min= 5872, max= 6038, avg=5949.50, stdev=68.38, samples=4 00:21:39.196 lat (msec) : 4=0.05%, 10=27.75%, 20=72.17%, 50=0.04% 00:21:39.196 cpu : usr=73.29%, sys=20.58%, ctx=2, majf=0, minf=5 00:21:39.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:39.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.196 issued rwts: total=11970,11947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.196 00:21:39.196 Run status group 0 (all jobs): 00:21:39.196 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.0MB), run=2008-2008msec 00:21:39.196 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (48.9MB), run=2008-2008msec 00:21:39.196 06:39:31 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:39.196 06:39:31 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:39.456 06:39:32 -- host/fio.sh@64 -- # ls_nested_guid=9202a868-8f08-4ed4-9800-1c3ba4a7525e 00:21:39.456 06:39:32 -- host/fio.sh@65 -- # get_lvs_free_mb 9202a868-8f08-4ed4-9800-1c3ba4a7525e 00:21:39.456 06:39:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=9202a868-8f08-4ed4-9800-1c3ba4a7525e 00:21:39.456 06:39:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:39.456 06:39:32 -- common/autotest_common.sh@1345 -- # local fc 00:21:39.456 06:39:32 -- common/autotest_common.sh@1346 -- # local cs 00:21:39.456 06:39:32 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:39.715 06:39:32 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:39.715 { 00:21:39.715 "base_bdev": "Nvme0n1", 00:21:39.715 "block_size": 4096, 00:21:39.715 "cluster_size": 1073741824, 00:21:39.715 "free_clusters": 0, 00:21:39.715 "name": "lvs_0", 00:21:39.715 "total_data_clusters": 4, 00:21:39.715 "uuid": "15fd46de-b47d-4d4b-a0ef-6239c15eb8e1" 00:21:39.715 }, 00:21:39.715 { 00:21:39.715 "base_bdev": "7cefbf39-1314-4980-94e6-2e9d931279fa", 00:21:39.715 "block_size": 4096, 00:21:39.715 "cluster_size": 4194304, 00:21:39.715 "free_clusters": 1022, 00:21:39.715 "name": "lvs_n_0", 00:21:39.715 "total_data_clusters": 1022, 00:21:39.715 "uuid": "9202a868-8f08-4ed4-9800-1c3ba4a7525e" 00:21:39.715 } 00:21:39.715 ]' 00:21:39.715 06:39:32 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="9202a868-8f08-4ed4-9800-1c3ba4a7525e") .free_clusters' 00:21:39.715 06:39:32 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:39.715 06:39:32 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="9202a868-8f08-4ed4-9800-1c3ba4a7525e") .cluster_size' 00:21:39.974 06:39:32 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:39.974 06:39:32 -- 
common/autotest_common.sh@1352 -- # free_mb=4088 00:21:39.974 4088 00:21:39.974 06:39:32 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:39.974 06:39:32 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:39.974 9e79d709-f8ed-490a-98f9-7e003ac1e9b3 00:21:40.233 06:39:32 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:40.233 06:39:32 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:40.491 06:39:33 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:40.749 06:39:33 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:40.749 06:39:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:40.749 06:39:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:40.749 06:39:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:40.749 06:39:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:40.749 06:39:33 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:40.749 06:39:33 -- common/autotest_common.sh@1320 -- # shift 00:21:40.749 06:39:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:40.749 06:39:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:40.749 06:39:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:40.749 06:39:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:40.749 06:39:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:40.749 06:39:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:40.749 06:39:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:40.749 06:39:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:41.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:41.007 fio-3.35 00:21:41.007 Starting 1 thread 00:21:43.536 00:21:43.536 test: (groupid=0, jobs=1): err= 0: pid=94692: Fri Oct 4 06:39:35 2024 00:21:43.536 read: IOPS=5559, BW=21.7MiB/s (22.8MB/s)(44.5MiB/2050msec) 00:21:43.536 slat (nsec): min=1874, 
max=349903, avg=2855.11, stdev=4619.61 00:21:43.536 clat (usec): min=6088, max=61162, avg=12261.69, stdev=3495.41 00:21:43.536 lat (usec): min=6096, max=61165, avg=12264.54, stdev=3495.37 00:21:43.536 clat percentiles (usec): 00:21:43.536 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:21:43.536 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:21:43.536 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13566], 95.00th=[14091], 00:21:43.536 | 99.00th=[15533], 99.50th=[51119], 99.90th=[59507], 99.95th=[61080], 00:21:43.536 | 99.99th=[61080] 00:21:43.536 bw ( KiB/s): min=21832, max=23041, per=100.00%, avg=22652.25, stdev=557.96, samples=4 00:21:43.536 iops : min= 5458, max= 5760, avg=5663.00, stdev=139.43, samples=4 00:21:43.536 write: IOPS=5530, BW=21.6MiB/s (22.7MB/s)(44.3MiB/2050msec); 0 zone resets 00:21:43.536 slat (nsec): min=1999, max=257475, avg=2941.76, stdev=3446.25 00:21:43.536 clat (usec): min=3029, max=60615, avg=10726.12, stdev=3448.01 00:21:43.536 lat (usec): min=3039, max=60617, avg=10729.07, stdev=3447.99 00:21:43.536 clat percentiles (usec): 00:21:43.536 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:21:43.536 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:21:43.536 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:21:43.536 | 99.00th=[13435], 99.50th=[50594], 99.90th=[56886], 99.95th=[59507], 00:21:43.536 | 99.99th=[60556] 00:21:43.536 bw ( KiB/s): min=22272, max=22808, per=100.00%, avg=22538.75, stdev=239.26, samples=4 00:21:43.536 iops : min= 5568, max= 5702, avg=5634.50, stdev=59.94, samples=4 00:21:43.536 lat (msec) : 4=0.01%, 10=16.16%, 20=83.27%, 50=0.04%, 100=0.52% 00:21:43.536 cpu : usr=73.74%, sys=20.16%, ctx=11, majf=0, minf=5 00:21:43.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.536 issued rwts: total=11396,11338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.536 00:21:43.536 Run status group 0 (all jobs): 00:21:43.536 READ: bw=21.7MiB/s (22.8MB/s), 21.7MiB/s-21.7MiB/s (22.8MB/s-22.8MB/s), io=44.5MiB (46.7MB), run=2050-2050msec 00:21:43.536 WRITE: bw=21.6MiB/s (22.7MB/s), 21.6MiB/s-21.6MiB/s (22.7MB/s-22.7MB/s), io=44.3MiB (46.4MB), run=2050-2050msec 00:21:43.536 06:39:35 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:43.536 06:39:36 -- host/fio.sh@74 -- # sync 00:21:43.536 06:39:36 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:44.103 06:39:36 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:44.103 06:39:36 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:44.362 06:39:36 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:44.682 06:39:37 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:45.634 06:39:37 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:45.634 06:39:37 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:45.634 06:39:37 -- host/fio.sh@86 -- # nvmftestfini 00:21:45.634 06:39:37 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:21:45.634 06:39:37 -- nvmf/common.sh@116 -- # sync 00:21:45.634 06:39:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:45.634 06:39:37 -- nvmf/common.sh@119 -- # set +e 00:21:45.634 06:39:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:45.634 06:39:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:45.634 rmmod nvme_tcp 00:21:45.634 rmmod nvme_fabrics 00:21:45.634 rmmod nvme_keyring 00:21:45.634 06:39:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:45.634 06:39:38 -- nvmf/common.sh@123 -- # set -e 00:21:45.634 06:39:38 -- nvmf/common.sh@124 -- # return 0 00:21:45.634 06:39:38 -- nvmf/common.sh@477 -- # '[' -n 94236 ']' 00:21:45.634 06:39:38 -- nvmf/common.sh@478 -- # killprocess 94236 00:21:45.634 06:39:38 -- common/autotest_common.sh@926 -- # '[' -z 94236 ']' 00:21:45.634 06:39:38 -- common/autotest_common.sh@930 -- # kill -0 94236 00:21:45.634 06:39:38 -- common/autotest_common.sh@931 -- # uname 00:21:45.634 06:39:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:45.634 06:39:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94236 00:21:45.634 killing process with pid 94236 00:21:45.634 06:39:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:45.634 06:39:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:45.634 06:39:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94236' 00:21:45.634 06:39:38 -- common/autotest_common.sh@945 -- # kill 94236 00:21:45.634 06:39:38 -- common/autotest_common.sh@950 -- # wait 94236 00:21:45.634 06:39:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:45.634 06:39:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:45.634 06:39:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:45.634 06:39:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.634 06:39:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:45.634 06:39:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.634 06:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.634 06:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.893 06:39:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:45.893 00:21:45.893 real 0m19.640s 00:21:45.893 user 1m26.583s 00:21:45.893 sys 0m4.380s 00:21:45.893 06:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.893 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:21:45.893 ************************************ 00:21:45.893 END TEST nvmf_fio_host 00:21:45.893 ************************************ 00:21:45.893 06:39:38 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:45.893 06:39:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:45.893 06:39:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:45.893 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:21:45.893 ************************************ 00:21:45.893 START TEST nvmf_failover 00:21:45.893 ************************************ 00:21:45.893 06:39:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:45.893 * Looking for test storage... 
00:21:45.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:45.893 06:39:38 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:45.893 06:39:38 -- nvmf/common.sh@7 -- # uname -s 00:21:45.893 06:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.893 06:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.893 06:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.893 06:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.893 06:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.893 06:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.894 06:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.894 06:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.894 06:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.894 06:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.894 06:39:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:21:45.894 06:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:21:45.894 06:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.894 06:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.894 06:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:45.894 06:39:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:45.894 06:39:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.894 06:39:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.894 06:39:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.894 06:39:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.894 06:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.894 06:39:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.894 06:39:38 -- paths/export.sh@5 
-- # export PATH 00:21:45.894 06:39:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.894 06:39:38 -- nvmf/common.sh@46 -- # : 0 00:21:45.894 06:39:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:45.894 06:39:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:45.894 06:39:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:45.894 06:39:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.894 06:39:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.894 06:39:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:45.894 06:39:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:45.894 06:39:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:45.894 06:39:38 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.894 06:39:38 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.894 06:39:38 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.894 06:39:38 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.894 06:39:38 -- host/failover.sh@18 -- # nvmftestinit 00:21:45.894 06:39:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:45.894 06:39:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.894 06:39:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:45.894 06:39:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:45.894 06:39:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:45.894 06:39:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.894 06:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.894 06:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.894 06:39:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:45.894 06:39:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:45.894 06:39:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:45.894 06:39:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:45.894 06:39:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:45.894 06:39:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:45.894 06:39:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.894 06:39:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.894 06:39:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:45.894 06:39:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:45.894 06:39:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:45.894 06:39:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:45.894 06:39:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:45.894 06:39:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.894 06:39:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:45.894 06:39:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:45.894 06:39:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
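The NVME_HOSTNQN/NVME_HOSTID pair seen in nvmf/common.sh above is generated fresh per run with nvme-cli, and every kernel-initiator connect in these tests passes both. A sketch of the derivation; the suffix-stripping step is an assumption about how common.sh extracts the UUID, since only nvme gen-hostnqn itself appears in the log:
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: keep only the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")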
00:21:45.894 06:39:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:45.894 06:39:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:45.894 06:39:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:45.894 Cannot find device "nvmf_tgt_br" 00:21:45.894 06:39:38 -- nvmf/common.sh@154 -- # true 00:21:45.894 06:39:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:45.894 Cannot find device "nvmf_tgt_br2" 00:21:45.894 06:39:38 -- nvmf/common.sh@155 -- # true 00:21:45.894 06:39:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:45.894 06:39:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:45.894 Cannot find device "nvmf_tgt_br" 00:21:45.894 06:39:38 -- nvmf/common.sh@157 -- # true 00:21:45.894 06:39:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:45.894 Cannot find device "nvmf_tgt_br2" 00:21:45.894 06:39:38 -- nvmf/common.sh@158 -- # true 00:21:45.894 06:39:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:46.153 06:39:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:46.153 06:39:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.153 06:39:38 -- nvmf/common.sh@161 -- # true 00:21:46.153 06:39:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.153 06:39:38 -- nvmf/common.sh@162 -- # true 00:21:46.153 06:39:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.153 06:39:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.153 06:39:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.153 06:39:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.153 06:39:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.153 06:39:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.153 06:39:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.153 06:39:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.153 06:39:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.153 06:39:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:46.153 06:39:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:46.153 06:39:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:46.153 06:39:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:46.153 06:39:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.153 06:39:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.153 06:39:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.153 06:39:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:46.153 06:39:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:46.153 06:39:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.153 06:39:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.153 06:39:38 -- nvmf/common.sh@197 -- # ip 
00:21:46.153 06:39:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:46.153 06:39:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:46.153 06:39:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:21:46.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:46.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms
00:21:46.153
00:21:46.153 --- 10.0.0.2 ping statistics ---
00:21:46.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:46.153 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:21:46.153 06:39:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:21:46.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:46.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:21:46.153
00:21:46.153 --- 10.0.0.3 ping statistics ---
00:21:46.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:46.153 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:21:46.153 06:39:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:46.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:46.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:21:46.153
00:21:46.153 --- 10.0.0.1 ping statistics ---
00:21:46.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:46.153 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:21:46.153 06:39:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:46.153 06:39:38 -- nvmf/common.sh@421 -- # return 0
00:21:46.153 06:39:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:46.153 06:39:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:46.153 06:39:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:46.153 06:39:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:46.153 06:39:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:46.153 06:39:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:46.153 06:39:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:46.412 06:39:38 -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:21:46.412 06:39:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:21:46.412 06:39:38 -- common/autotest_common.sh@712 -- # xtrace_disable
00:21:46.412 06:39:38 -- common/autotest_common.sh@10 -- # set +x
00:21:46.412 06:39:38 -- nvmf/common.sh@469 -- # nvmfpid=94962
00:21:46.412 06:39:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:46.412 06:39:38 -- nvmf/common.sh@470 -- # waitforlisten 94962
00:21:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:46.412 06:39:38 -- common/autotest_common.sh@819 -- # '[' -z 94962 ']'
00:21:46.412 06:39:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:46.412 06:39:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:46.412 06:39:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:46.412 06:39:38 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:46.412 06:39:38 -- common/autotest_common.sh@10 -- # set +x
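With the topology up, the script opens TCP port 4420 on the initiator interface, lets the bridge forward between its own ports, ping-checks the three addresses in both directions, and only then launches nvmf_tgt inside the namespace. Condensed into a runnable sketch; SPDK_DIR is a placeholder for /home/vagrant/spdk_repo/spdk, not a variable the script itself defines:

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # root namespace -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # namespace -> initiator
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # waitforlisten in the trace polls /var/tmp/spdk.sock until this process answers RPCs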
00:21:46.412 [2024-10-04 06:39:38.906129] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:21:46.412 [2024-10-04 06:39:38.906394] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:46.412 [2024-10-04 06:39:39.046275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:46.671 [2024-10-04 06:39:39.143155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:46.671 [2024-10-04 06:39:39.143657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:46.671 [2024-10-04 06:39:39.143836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:46.671 [2024-10-04 06:39:39.143857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:46.671 [2024-10-04 06:39:39.143998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:46.671 [2024-10-04 06:39:39.144146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:46.671 [2024-10-04 06:39:39.144160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:47.240 06:39:39 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:47.240 06:39:39 -- common/autotest_common.sh@852 -- # return 0
00:21:47.240 06:39:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:21:47.240 06:39:39 -- common/autotest_common.sh@718 -- # xtrace_disable
00:21:47.240 06:39:39 -- common/autotest_common.sh@10 -- # set +x
00:21:47.500 06:39:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:47.500 06:39:39 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:47.500 [2024-10-04 06:39:40.141919] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:47.500 06:39:40 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:47.760 Malloc0
00:21:47.760 06:39:40 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:48.019 06:39:40 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:48.276 06:39:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:48.534 [2024-10-04 06:39:41.068173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:48.534 06:39:41 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:48.792 [2024-10-04 06:39:41.300293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:48.792 06:39:41 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:49.050 [2024-10-04 06:39:41.512449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:49.050 06:39:41 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
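failover.sh@22-28 provisions the target entirely over its RPC socket: one TCP transport, one 64 MB malloc bdev, one subsystem with that bdev as a namespace, and three listeners on 10.0.0.2 so the host under test can move between ports. After that, bdevperf is started (@30) as the initiator-side I/O generator with its own RPC socket at /var/tmp/bdevperf.sock. The provisioning, as a plain script (rpc.py is $SPDK_DIR/scripts/rpc.py; the transport options are copied verbatim from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB backing device, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                    # three ports = three failover paths
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done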
00:21:49.050 06:39:41 -- host/failover.sh@31 -- # bdevperf_pid=95068
00:21:49.050 06:39:41 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:49.050 06:39:41 -- host/failover.sh@34 -- # waitforlisten 95068 /var/tmp/bdevperf.sock
00:21:49.050 06:39:41 -- common/autotest_common.sh@819 -- # '[' -z 95068 ']'
00:21:49.050 06:39:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:49.050 06:39:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:49.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:49.050 06:39:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:49.050 06:39:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:49.050 06:39:41 -- common/autotest_common.sh@10 -- # set +x
00:21:49.983 06:39:42 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:49.983 06:39:42 -- common/autotest_common.sh@852 -- # return 0
00:21:49.983 06:39:42 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:50.242 NVMe0n1
00:21:50.242 06:39:42 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:50.500
00:21:50.500 06:39:43 -- host/failover.sh@39 -- # run_test_pid=95120
00:21:50.500 06:39:43 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:50.500 06:39:43 -- host/failover.sh@41 -- # sleep 1
00:21:51.876 06:39:44 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:51.876 [2024-10-04 06:39:44.416704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0ab0 is same with the state(5) to be set
00:21:51.876 [... the same tqpair=0x12c0ab0 message repeats, timestamps 06:39:44.416779 through 06:39:44.417563; the duplicate entries are omitted ...]
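failover.sh@35-43 is the test proper: bdevperf gets two controller paths to the same subsystem (both land in the single NVMe0n1 bdev), the 15-second verify workload starts, and one second in, the active listener is pulled out from under it. The recv-state errors above appear to be the target tearing down the 4420 queue pair while I/O is still in flight on it. Condensed:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # active path, creates NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # standby path
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                  # drives the -q 128 -w verify -t 15 job
  run_test_pid=$!
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420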
00:21:51.877 06:39:44 -- host/failover.sh@45 -- # sleep 3
00:21:55.163 06:39:47 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:55.163
00:21:55.422 06:39:47 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:55.422 [2024-10-04 06:39:48.067432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c1920 is same with the state(5) to be set
00:21:55.422 [... the same tqpair=0x12c1920 message repeats, timestamps 06:39:48.067474 through 06:39:48.067978; the duplicate entries are omitted ...]
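Three seconds later the same swap happens one port further along: a new standby on 4422 is attached before the now-active 4421 listener is removed, so bdevperf always has a live path left to fail over to. The errors above are the 4421 queue pair being torn down in turn:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # next standby
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421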
00:21:55.423 06:39:48 -- host/failover.sh@50 -- # sleep 3
00:21:58.708 06:39:51 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:58.708 [2024-10-04 06:39:51.344310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:58.708 06:39:51 -- host/failover.sh@55 -- # sleep 1
00:21:59.692 06:39:52 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:59.952 [2024-10-04 06:39:52.583759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3040 is same with the state(5) to be set
00:21:59.952 [... the same tqpair=0x12c3040 message repeats, timestamps 06:39:52.583864 through 06:39:52.584224; the duplicate entries are omitted ...]
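The final hop is a failback: 4420 is re-added (the target logs a fresh Listening notice for it), given a second to accept connections, and then 4422 is removed, which pushes the workload back onto the original port and produces the last burst of recv-state errors above:

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422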
00:21:59.952 06:39:52 -- host/failover.sh@59 -- # wait 95120
00:22:06.523 0
00:22:06.523 06:39:58 -- host/failover.sh@61 -- # killprocess 95068
00:22:06.523 06:39:58 -- common/autotest_common.sh@926 -- # '[' -z 95068 ']'
00:22:06.523 06:39:58 -- common/autotest_common.sh@930 -- # kill -0 95068
00:22:06.523 06:39:58 -- common/autotest_common.sh@931 -- # uname
00:22:06.523 06:39:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:06.523 06:39:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95068
00:22:06.523 killing process with pid 95068
00:22:06.523 06:39:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:06.523 06:39:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:06.523 06:39:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95068'
00:22:06.523 06:39:58 -- common/autotest_common.sh@945 -- # kill 95068
00:22:06.523 06:39:58 -- common/autotest_common.sh@950 -- # wait 95068
00:22:06.523 06:39:58 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:06.523 [2024-10-04 06:39:41.569107] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:22:06.523 [2024-10-04 06:39:41.569191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95068 ]
00:22:06.523 [2024-10-04 06:39:41.701942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:06.523 [2024-10-04 06:39:41.779347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:06.523 Running I/O for 15 seconds...
00:22:06.523 [2024-10-04 06:39:44.418125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.523 [2024-10-04 06:39:44.418233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.523 [... every other READ and WRITE still outstanding on sqid:1 (lba roughly 1600 through 2520, len:8) is reported the same way, each completed with ABORTED - SQ DELETION (00/08); the remaining command/completion pairs are omitted ...]
00:22:06.525 [2024-10-04 06:39:44.420253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:06.525 [2024-10-04 06:39:44.420536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.420942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.420968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.420990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.421003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.421030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.421057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.421082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.421109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.525 [2024-10-04 06:39:44.421135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.421161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.421222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.525 [2024-10-04 06:39:44.421247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.525 [2024-10-04 06:39:44.421260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2784 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 06:39:44.421624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.526 [2024-10-04 
06:39:44.421681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.421973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.421989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.526 [2024-10-04 06:39:44.422150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2293d40 is same with the state(5) to be set 00:22:06.526 [2024-10-04 06:39:44.422181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.526 [2024-10-04 06:39:44.422191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.526 [2024-10-04 06:39:44.422218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2344 len:8 PRP1 0x0 PRP2 0x0 00:22:06.526 [2024-10-04 06:39:44.422229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.526 [2024-10-04 06:39:44.422310] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2293d40 was disconnected and freed. reset controller. 
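The "(00/08)" printed in every completion above is the NVMe status pair SCT/SC: status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion; p, m and dnr are the phase tag, more and do-not-retry bits. A minimal sketch, assuming an application polling the SPDK NVMe host library directly, of how an I/O completion callback could key off exactly this status. The spdk_nvme_cpl fields and the two constants are real SPDK definitions (spdk/nvme_spec.h, pulled in by spdk/nvme.h); io_done and the requeue flag are illustrative, not SPDK code.

#include <stdbool.h>
#include "spdk/nvme.h"

/* Completion callback: detect the "ABORTED - SQ DELETION (00/08)" status
 * shown in the log. SCT 0x0 is SPDK_NVME_SCT_GENERIC; SC 0x08 is
 * SPDK_NVME_SC_ABORTED_SQ_DELETION. */
static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	bool *requeue = ctx;   /* illustrative per-I/O flag owned by the caller */

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* dnr:0 in the log means the do-not-retry bit is clear, so
		 * the command may be resubmitted after the reset completes. */
		*requeue = !cpl->status.dnr;
	}
}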
00:22:06.526 [2024-10-04 06:39:44.422334] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:06.526 [2024-10-04 06:39:44.422407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.526 [2024-10-04 06:39:44.422427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.526 [2024-10-04 06:39:44.422442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.526 [2024-10-04 06:39:44.422455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.526 [2024-10-04 06:39:44.422468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.526 [2024-10-04 06:39:44.422479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.526 [2024-10-04 06:39:44.422492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:06.526 [2024-10-04 06:39:44.422503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.526 [2024-10-04 06:39:44.422516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:06.526 [2024-10-04 06:39:44.424999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.526 [2024-10-04 06:39:44.425034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2261940 (9): Bad file descriptor
00:22:06.526 [2024-10-04 06:39:44.456732] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
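The sequence above is bdev_nvme's failover path in this test: the admin qpair's queued ASYNC EVENT REQUESTs are aborted, the controller is marked failed and disconnected, and the reset reconnects against the alternate TRID (10.0.0.2:4421) until "Resetting controller successful" is logged. For an application driving the NVMe host library directly, a minimal sketch of the recoverable part of that flow follows. spdk_nvme_qpair_process_completions, spdk_nvme_ctrlr_free_io_qpair, spdk_nvme_ctrlr_reset and spdk_nvme_ctrlr_alloc_io_qpair are real public SPDK APIs; poll_or_recover, g_ctrlr and g_qpair are illustrative, and the TRID switch itself is multipath logic inside bdev_nvme that this sketch does not reproduce.

#include "spdk/nvme.h"
#include "spdk/log.h"

static struct spdk_nvme_ctrlr *g_ctrlr;   /* connected elsewhere */
static struct spdk_nvme_qpair *g_qpair;   /* I/O qpair being polled */

static int
poll_or_recover(void)
{
	/* A negative return from the poller means the qpair is unusable --
	 * the "Failed to flush tqpair ... Bad file descriptor" case above. */
	int32_t rc = spdk_nvme_qpair_process_completions(g_qpair, 0 /* no limit */);
	if (rc >= 0) {
		return 0;        /* rc completions were processed */
	}

	SPDK_NOTICELOG("qpair failed (rc=%d), resetting controller\n", rc);

	/* Outstanding commands complete with ABORTED - SQ DELETION, as in
	 * the log, once the qpair is freed and the controller resets. */
	spdk_nvme_ctrlr_free_io_qpair(g_qpair);
	g_qpair = NULL;

	if (spdk_nvme_ctrlr_reset(g_ctrlr) != 0) {
		return -1;       /* controller stayed in failed state */
	}

	g_qpair = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
	return g_qpair != NULL ? 0 : -1;
}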
00:22:06.526 [2024-10-04 06:39:48.068087 - 06:39:48.071324] [condensed: after the failover to 10.0.0.2:4421, a second run of repeated nvme_qpair.c NOTICE pairs - 243:nvme_io_qpair_print_command records for queued commands (READ with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 or WRITE with SGL DATA BLOCK OFFSET 0x0 len:0x1000, all sqid:1 nsid:1 len:8, lba 20504-21760) each followed by a 474:spdk_nvme_print_completion record "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" - as this qpair is deleted in turn]
00:22:06.529 [2024-10-04 06:39:48.071346]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.529 [2024-10-04 06:39:48.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.529 [2024-10-04 06:39:48.071669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.529 [2024-10-04 06:39:48.071696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.529 [2024-10-04 06:39:48.071723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.529 [2024-10-04 06:39:48.071737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.530 [2024-10-04 06:39:48.071750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.530 [2024-10-04 06:39:48.071782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.530 [2024-10-04 06:39:48.071810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.530 [2024-10-04 06:39:48.071837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.530 [2024-10-04 06:39:48.071863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.071901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.071934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.071960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.071974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.071986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.072013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.072040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.072066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:48.072093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226dd90 is same with the state(5) to be set 00:22:06.530 [2024-10-04 06:39:48.072121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.530 [2024-10-04 06:39:48.072138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.530 [2024-10-04 06:39:48.072169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21400 len:8 PRP1 0x0 PRP2 0x0 00:22:06.530 [2024-10-04 06:39:48.072182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072237] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x226dd90 was disconnected and freed. reset controller. 
00:22:06.530 [2024-10-04 06:39:48.072255] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:06.530 [2024-10-04 06:39:48.072307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.530 [2024-10-04 06:39:48.072328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.530 [2024-10-04 06:39:48.072355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.530 [2024-10-04 06:39:48.072380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.530 [2024-10-04 06:39:48.072405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:48.072417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.530 [2024-10-04 06:39:48.072463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2261940 (9): Bad file descriptor 00:22:06.530 [2024-10-04 06:39:48.074834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.530 [2024-10-04 06:39:48.104527] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:06.530 [2024-10-04 06:39:52.584342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.530 [2024-10-04 06:39:52.584983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.530 [2024-10-04 06:39:52.584995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27568 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 
[2024-10-04 06:39:52.585908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.585960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.585985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.585999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.586013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.586028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.531 [2024-10-04 06:39:52.586047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.586061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.586073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.586087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.586100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.586113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.531 [2024-10-04 06:39:52.586125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.531 [2024-10-04 06:39:52.586139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.586315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.586340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.586365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.586397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.586944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.586971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.586985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.587006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 
[2024-10-04 06:39:52.587043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.587056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.587070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.532 [2024-10-04 06:39:52.587082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.587095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.532 [2024-10-04 06:39:52.587114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.532 [2024-10-04 06:39:52.587129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.533 [2024-10-04 06:39:52.587193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.533 [2024-10-04 06:39:52.587219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:06.533 [2024-10-04 06:39:52.587839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27400 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.587968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.587992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.588018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.588044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.533 [2024-10-04 06:39:52.588079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2294d50 is same with the state(5) to be set 00:22:06.533 [2024-10-04 06:39:52.588109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:06.533 [2024-10-04 06:39:52.588118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:06.533 [2024-10-04 06:39:52.588128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27472 len:8 PRP1 0x0 PRP2 0x0 00:22:06.533 [2024-10-04 06:39:52.588141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588206] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2294d50 was disconnected and freed. reset controller. 
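
The flood of ABORTED - SQ DELETION notices above is expected at this point in the run: status (00/08) is the NVMe generic status "Command Aborted due to SQ Deletion", reported for every command still queued on the TCP qpair when bdev_nvme tears that qpair down to reset the controller. A rough per-line tally from the captured bdevperf log, as a sketch only (the try.txt path is the file this test writes; the one-liner itself is not part of the test):

    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
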
00:22:06.533 [2024-10-04 06:39:52.588231] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:06.533 [2024-10-04 06:39:52.588285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.533 [2024-10-04 06:39:52.588314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.533 [2024-10-04 06:39:52.588351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.533 [2024-10-04 06:39:52.588376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.533 [2024-10-04 06:39:52.588401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.533 [2024-10-04 06:39:52.588414] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.533 [2024-10-04 06:39:52.590299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.534 [2024-10-04 06:39:52.590337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2261940 (9): Bad file descriptor 00:22:06.534 [2024-10-04 06:39:52.609273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
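
bdev_nvme_failover_trid above moves the controller's active transport ID from 10.0.0.2:4422 back to 10.0.0.2:4420, and the reset that follows reconnects on that portal. One way to confirm which portal NVMe0 lands on, sketched by combining two invocations that each appear verbatim elsewhere in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'    # expect 4420 after this failover
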
00:22:06.534
00:22:06.534 Latency(us)
00:22:06.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.534 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:06.534 Verification LBA range: start 0x0 length 0x4000
00:22:06.534 NVMe0n1 : 15.01 13594.83 53.10 284.47 0.00 9206.13 692.60 17992.61
00:22:06.534 ===================================================================================================================
00:22:06.534 Total : 13594.83 53.10 284.47 0.00 9206.13 692.60 17992.61
00:22:06.534 Received shutdown signal, test time was about 15.000000 seconds
00:22:06.534
00:22:06.534 Latency(us)
00:22:06.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.534 ===================================================================================================================
00:22:06.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:06.534 06:39:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:06.534 06:39:58 -- host/failover.sh@65 -- # count=3 00:22:06.534 06:39:58 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:06.534 06:39:58 -- host/failover.sh@73 -- # bdevperf_pid=95322 00:22:06.534 06:39:58 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:06.534 06:39:58 -- host/failover.sh@75 -- # waitforlisten 95322 /var/tmp/bdevperf.sock 00:22:06.534 06:39:58 -- common/autotest_common.sh@819 -- # '[' -z 95322 ']' 00:22:06.534 06:39:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.534 06:39:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:06.534 06:39:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
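
The host/failover.sh@65-67 trace above is the pass gate for the first phase: scraping the captured bdevperf output must find exactly three 'Resetting controller successful' notices, one per forced failover. Reduced to a minimal sketch (with $log standing in for the captured output file):

    count=$(grep -c 'Resetting controller successful' "$log")
    (( count != 3 )) && exit 1    # all three failovers must have reset cleanly

The bdevperf relaunched at @72-75 (-z keeps it idle until driven over the RPC socket) is then steered through a second failover pass; the listener/attach/detach trace that follows reduces to this shape (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and all commands appear verbatim below):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do    # attach one path per portal
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # drop the active portal so I/O fails over
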
00:22:06.534 06:39:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:06.534 06:39:58 -- common/autotest_common.sh@10 -- # set +x 00:22:07.101 06:39:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:07.101 06:39:59 -- common/autotest_common.sh@852 -- # return 0 00:22:07.101 06:39:59 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:07.359 [2024-10-04 06:39:59.863479] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.359 06:39:59 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:07.618 [2024-10-04 06:40:00.095857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:07.618 06:40:00 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.876 NVMe0n1 00:22:07.876 06:40:00 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.134 00:22:08.134 06:40:00 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:08.392 00:22:08.650 06:40:01 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.650 06:40:01 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:08.908 06:40:01 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.166 06:40:01 -- host/failover.sh@87 -- # sleep 3 00:22:12.463 06:40:04 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:12.463 06:40:04 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:12.463 06:40:04 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.463 06:40:04 -- host/failover.sh@90 -- # run_test_pid=95461 00:22:12.463 06:40:04 -- host/failover.sh@92 -- # wait 95461 00:22:13.398 0 00:22:13.398 06:40:05 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:13.398 [2024-10-04 06:39:58.648647] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:22:13.399 [2024-10-04 06:39:58.648750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95322 ] 00:22:13.399 [2024-10-04 06:39:58.779073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.399 [2024-10-04 06:39:58.847393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.399 [2024-10-04 06:40:01.586068] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:13.399 [2024-10-04 06:40:01.586160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.399 [2024-10-04 06:40:01.586185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.399 [2024-10-04 06:40:01.586200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.399 [2024-10-04 06:40:01.586212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.399 [2024-10-04 06:40:01.586227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.399 [2024-10-04 06:40:01.586239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.399 [2024-10-04 06:40:01.586251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.399 [2024-10-04 06:40:01.586263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.399 [2024-10-04 06:40:01.586275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:13.399 [2024-10-04 06:40:01.586312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:13.399 [2024-10-04 06:40:01.586348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f7940 (9): Bad file descriptor 00:22:13.399 [2024-10-04 06:40:01.589012] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:13.399 Running I/O for 1 seconds... 
00:22:13.399
00:22:13.399 Latency(us)
00:22:13.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:13.399 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:13.399 Verification LBA range: start 0x0 length 0x4000
00:22:13.399 NVMe0n1 : 1.01 14766.60 57.68 0.00 0.00 8631.00 1288.38 14537.08
00:22:13.399 ===================================================================================================================
00:22:13.399 Total : 14766.60 57.68 0.00 0.00 8631.00 1288.38 14537.08
00:22:13.399 06:40:05 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.399 06:40:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:13.657 06:40:06 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:13.916 06:40:06 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.916 06:40:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:14.174 06:40:06 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.433 06:40:07 -- host/failover.sh@101 -- # sleep 3 00:22:17.718 06:40:10 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:17.718 06:40:10 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:17.718 06:40:10 -- host/failover.sh@108 -- # killprocess 95322 00:22:17.718 06:40:10 -- common/autotest_common.sh@926 -- # '[' -z 95322 ']' 00:22:17.718 06:40:10 -- common/autotest_common.sh@930 -- # kill -0 95322 00:22:17.718 06:40:10 -- common/autotest_common.sh@931 -- # uname 00:22:17.718 06:40:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.718 06:40:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95322 00:22:17.718 killing process with pid 95322 00:22:17.718 06:40:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:17.718 06:40:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:17.718 06:40:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95322' 00:22:17.718 06:40:10 -- common/autotest_common.sh@945 -- # kill 95322 00:22:17.718 06:40:10 -- common/autotest_common.sh@950 -- # wait 95322 00:22:17.997 06:40:10 -- host/failover.sh@110 -- # sync 00:22:17.997 06:40:10 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.261 06:40:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:18.261 06:40:10 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:18.261 06:40:10 -- host/failover.sh@116 -- # nvmftestfini 00:22:18.261 06:40:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:18.261 06:40:10 -- nvmf/common.sh@116 -- # sync 00:22:18.261 06:40:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:18.262 06:40:10 -- nvmf/common.sh@119 -- # set +e 00:22:18.262 06:40:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:18.262 06:40:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:18.262 rmmod nvme_tcp 00:22:18.262 rmmod nvme_fabrics 00:22:18.520 rmmod nvme_keyring 00:22:18.520 06:40:10 -- nvmf/common.sh@122
-- # modprobe -v -r nvme-fabrics 00:22:18.520 06:40:10 -- nvmf/common.sh@123 -- # set -e 00:22:18.520 06:40:10 -- nvmf/common.sh@124 -- # return 0 00:22:18.520 06:40:10 -- nvmf/common.sh@477 -- # '[' -n 94962 ']' 00:22:18.520 06:40:10 -- nvmf/common.sh@478 -- # killprocess 94962 00:22:18.520 06:40:10 -- common/autotest_common.sh@926 -- # '[' -z 94962 ']' 00:22:18.520 06:40:10 -- common/autotest_common.sh@930 -- # kill -0 94962 00:22:18.520 06:40:10 -- common/autotest_common.sh@931 -- # uname 00:22:18.520 06:40:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:18.520 06:40:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94962 00:22:18.520 killing process with pid 94962 00:22:18.520 06:40:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:18.520 06:40:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:18.520 06:40:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94962' 00:22:18.520 06:40:10 -- common/autotest_common.sh@945 -- # kill 94962 00:22:18.520 06:40:10 -- common/autotest_common.sh@950 -- # wait 94962 00:22:18.778 06:40:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:18.778 06:40:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:18.778 06:40:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:18.778 06:40:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.778 06:40:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:18.778 06:40:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.778 06:40:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.778 06:40:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.778 06:40:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:18.778 ************************************ 00:22:18.778 END TEST nvmf_failover 00:22:18.778 ************************************ 00:22:18.778 00:22:18.778 real 0m32.928s 00:22:18.778 user 2m7.554s 00:22:18.778 sys 0m5.092s 00:22:18.778 06:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.778 06:40:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.778 06:40:11 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:18.778 06:40:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:18.778 06:40:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:18.778 06:40:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.778 ************************************ 00:22:18.778 START TEST nvmf_discovery 00:22:18.778 ************************************ 00:22:18.778 06:40:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:18.778 * Looking for test storage... 
00:22:18.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:18.778 06:40:11 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.037 06:40:11 -- nvmf/common.sh@7 -- # uname -s 00:22:19.037 06:40:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.037 06:40:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.037 06:40:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.037 06:40:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.037 06:40:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.037 06:40:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.037 06:40:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.037 06:40:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.037 06:40:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.037 06:40:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:19.037 06:40:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:19.037 06:40:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.037 06:40:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.037 06:40:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.037 06:40:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.037 06:40:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.037 06:40:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.037 06:40:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.037 06:40:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.037 06:40:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.037 06:40:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.037 06:40:11 -- paths/export.sh@5 
-- # export PATH 00:22:19.037 06:40:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.037 06:40:11 -- nvmf/common.sh@46 -- # : 0 00:22:19.037 06:40:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:19.037 06:40:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:19.037 06:40:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:19.037 06:40:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.037 06:40:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.037 06:40:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:19.037 06:40:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:19.037 06:40:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:19.037 06:40:11 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:19.037 06:40:11 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:19.037 06:40:11 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:19.037 06:40:11 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:19.037 06:40:11 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:19.037 06:40:11 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:19.037 06:40:11 -- host/discovery.sh@25 -- # nvmftestinit 00:22:19.037 06:40:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:19.037 06:40:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.037 06:40:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:19.037 06:40:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:19.037 06:40:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:19.037 06:40:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.037 06:40:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.037 06:40:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.037 06:40:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:19.037 06:40:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:19.037 06:40:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.037 06:40:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.037 06:40:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:19.037 06:40:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:19.037 06:40:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.037 06:40:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.037 06:40:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.037 06:40:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.037 06:40:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.038 
06:40:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.038 06:40:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.038 06:40:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.038 06:40:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:19.038 06:40:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:19.038 Cannot find device "nvmf_tgt_br" 00:22:19.038 06:40:11 -- nvmf/common.sh@154 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.038 Cannot find device "nvmf_tgt_br2" 00:22:19.038 06:40:11 -- nvmf/common.sh@155 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:19.038 06:40:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:19.038 Cannot find device "nvmf_tgt_br" 00:22:19.038 06:40:11 -- nvmf/common.sh@157 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:19.038 Cannot find device "nvmf_tgt_br2" 00:22:19.038 06:40:11 -- nvmf/common.sh@158 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:19.038 06:40:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:19.038 06:40:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.038 06:40:11 -- nvmf/common.sh@161 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.038 06:40:11 -- nvmf/common.sh@162 -- # true 00:22:19.038 06:40:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.038 06:40:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.038 06:40:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.038 06:40:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.038 06:40:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.038 06:40:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.038 06:40:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.038 06:40:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:19.038 06:40:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:19.038 06:40:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:19.038 06:40:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:19.038 06:40:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:19.038 06:40:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:19.296 06:40:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.296 06:40:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.296 06:40:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.296 06:40:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:19.296 06:40:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:19.296 06:40:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:22:19.296 06:40:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:19.296 06:40:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:19.296 06:40:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:19.296 06:40:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:19.296 06:40:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:19.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:22:19.296 00:22:19.296 --- 10.0.0.2 ping statistics --- 00:22:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.296 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:19.296 06:40:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:19.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:19.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:19.296 00:22:19.296 --- 10.0.0.3 ping statistics --- 00:22:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.296 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:19.296 06:40:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:19.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:19.296 00:22:19.296 --- 10.0.0.1 ping statistics --- 00:22:19.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.296 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:19.296 06:40:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.296 06:40:11 -- nvmf/common.sh@421 -- # return 0 00:22:19.296 06:40:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:19.296 06:40:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.296 06:40:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:19.296 06:40:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:19.296 06:40:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.296 06:40:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:19.296 06:40:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:19.296 06:40:11 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:19.296 06:40:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:19.296 06:40:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:19.296 06:40:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.296 06:40:11 -- nvmf/common.sh@469 -- # nvmfpid=95770 00:22:19.296 06:40:11 -- nvmf/common.sh@470 -- # waitforlisten 95770 00:22:19.296 06:40:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.296 06:40:11 -- common/autotest_common.sh@819 -- # '[' -z 95770 ']' 00:22:19.296 06:40:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.296 06:40:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.296 06:40:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
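
nvmf_veth_init above builds the virtual topology this discovery test runs on: the target sits inside the nvmf_tgt_ns_spdk network namespace, reachable from the initiator side through veth pairs slaved to the nvmf_br bridge, and the three pings verify 10.0.0.1/2/3 before the target application starts. Condensed to its essentials (every command is taken from the trace above; the second target interface nvmf_tgt_if2/10.0.0.3 and the teardown of stale devices are trimmed):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
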
00:22:19.296 06:40:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.296 06:40:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.297 [2024-10-04 06:40:11.899184] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:19.297 [2024-10-04 06:40:11.899272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.555 [2024-10-04 06:40:12.035468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.555 [2024-10-04 06:40:12.114917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.555 [2024-10-04 06:40:12.115108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.555 [2024-10-04 06:40:12.115128] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.555 [2024-10-04 06:40:12.115141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.555 [2024-10-04 06:40:12.115180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.490 06:40:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.490 06:40:12 -- common/autotest_common.sh@852 -- # return 0 00:22:20.490 06:40:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:20.490 06:40:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 06:40:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.490 06:40:12 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.490 06:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 [2024-10-04 06:40:12.912754] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.490 06:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.490 06:40:12 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:20.490 06:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 [2024-10-04 06:40:12.920937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:20.490 06:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.490 06:40:12 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:20.490 06:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 null0 00:22:20.490 06:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.490 06:40:12 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:20.490 06:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 null1 00:22:20.490 06:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.490 06:40:12 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:20.490 06:40:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.490 06:40:12 -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.490 06:40:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.490 06:40:12 -- host/discovery.sh@45 -- # hostpid=95820 00:22:20.490 06:40:12 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:20.490 06:40:12 -- host/discovery.sh@46 -- # waitforlisten 95820 /tmp/host.sock 00:22:20.490 06:40:12 -- common/autotest_common.sh@819 -- # '[' -z 95820 ']' 00:22:20.490 06:40:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:20.490 06:40:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:20.490 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:20.490 06:40:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:20.490 06:40:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:20.490 06:40:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.490 [2024-10-04 06:40:13.005469] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:20.490 [2024-10-04 06:40:13.005580] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95820 ] 00:22:20.490 [2024-10-04 06:40:13.141198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.749 [2024-10-04 06:40:13.205846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:20.749 [2024-10-04 06:40:13.206055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.689 06:40:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:21.689 06:40:14 -- common/autotest_common.sh@852 -- # return 0 00:22:21.689 06:40:14 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.689 06:40:14 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:21.689 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.689 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.689 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.689 06:40:14 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:21.689 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.689 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.689 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@72 -- # notify_id=0 00:22:21.690 06:40:14 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # sort 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # xargs 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:21.690 06:40:14 -- host/discovery.sh@79 -- # get_bdev_list 00:22:21.690 
06:40:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # xargs 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # sort 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:21.690 06:40:14 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # sort 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # xargs 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:21.690 06:40:14 -- host/discovery.sh@83 -- # get_bdev_list 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # sort 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # xargs 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:21.690 06:40:14 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # sort 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- host/discovery.sh@59 -- # xargs 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.690 06:40:14 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:21.690 06:40:14 -- host/discovery.sh@87 -- # get_bdev_list 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.690 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.690 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # sort 00:22:21.690 06:40:14 -- host/discovery.sh@55 -- # 
xargs 00:22:21.690 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:21.948 06:40:14 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:21.948 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.948 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.948 [2024-10-04 06:40:14.385139] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.948 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:21.948 06:40:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.948 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.948 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.948 06:40:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.948 06:40:14 -- host/discovery.sh@59 -- # xargs 00:22:21.948 06:40:14 -- host/discovery.sh@59 -- # sort 00:22:21.948 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:21.948 06:40:14 -- host/discovery.sh@93 -- # get_bdev_list 00:22:21.948 06:40:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.948 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.948 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.948 06:40:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.948 06:40:14 -- host/discovery.sh@55 -- # xargs 00:22:21.948 06:40:14 -- host/discovery.sh@55 -- # sort 00:22:21.948 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:21.948 06:40:14 -- host/discovery.sh@94 -- # get_notification_count 00:22:21.948 06:40:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:21.948 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.948 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.948 06:40:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:21.948 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@74 -- # notification_count=0 00:22:21.948 06:40:14 -- host/discovery.sh@75 -- # notify_id=0 00:22:21.948 06:40:14 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:21.948 06:40:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.948 06:40:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.948 06:40:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.948 06:40:14 -- host/discovery.sh@100 -- # sleep 1 00:22:22.514 [2024-10-04 06:40:15.045745] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:22.514 [2024-10-04 06:40:15.045804] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:22.514 [2024-10-04 06:40:15.045823] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.514 [2024-10-04 06:40:15.133883] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:22.773 [2024-10-04 06:40:15.197009] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:22.773 [2024-10-04 06:40:15.197038] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:23.031 06:40:15 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:23.031 06:40:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.031 06:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.031 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.031 06:40:15 -- host/discovery.sh@59 -- # sort 00:22:23.031 06:40:15 -- host/discovery.sh@59 -- # xargs 00:22:23.031 06:40:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.031 06:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.031 06:40:15 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.031 06:40:15 -- host/discovery.sh@102 -- # get_bdev_list 00:22:23.031 06:40:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.031 06:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.031 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.032 06:40:15 -- host/discovery.sh@55 -- # sort 00:22:23.032 06:40:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.032 06:40:15 -- host/discovery.sh@55 -- # xargs 00:22:23.032 06:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.032 06:40:15 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:23.032 06:40:15 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:23.032 06:40:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:23.032 06:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.032 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.032 06:40:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:23.032 06:40:15 -- host/discovery.sh@63 -- # sort -n 00:22:23.032 06:40:15 -- host/discovery.sh@63 -- # xargs 00:22:23.032 06:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.290 06:40:15 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:22:23.290 06:40:15 -- host/discovery.sh@104 -- # get_notification_count 00:22:23.290 06:40:15 -- host/discovery.sh@74 -- # jq '. | length' 00:22:23.290 06:40:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:23.290 06:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.290 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.290 06:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.290 06:40:15 -- host/discovery.sh@74 -- # notification_count=1 00:22:23.290 06:40:15 -- host/discovery.sh@75 -- # notify_id=1 00:22:23.290 06:40:15 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:23.290 06:40:15 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:23.290 06:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.290 06:40:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.290 06:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.290 06:40:15 -- host/discovery.sh@109 -- # sleep 1 00:22:24.226 06:40:16 -- host/discovery.sh@110 -- # get_bdev_list 00:22:24.226 06:40:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.226 06:40:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.226 06:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.226 06:40:16 -- host/discovery.sh@55 -- # sort 00:22:24.226 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.226 06:40:16 -- host/discovery.sh@55 -- # xargs 00:22:24.226 06:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.226 06:40:16 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.226 06:40:16 -- host/discovery.sh@111 -- # get_notification_count 00:22:24.226 06:40:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:24.226 06:40:16 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.226 06:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.226 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.226 06:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.485 06:40:16 -- host/discovery.sh@74 -- # notification_count=1 00:22:24.485 06:40:16 -- host/discovery.sh@75 -- # notify_id=2 00:22:24.485 06:40:16 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:24.485 06:40:16 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:24.485 06:40:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.485 06:40:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.485 [2024-10-04 06:40:16.922146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.486 [2024-10-04 06:40:16.923075] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:24.486 [2024-10-04 06:40:16.923118] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.486 06:40:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.486 06:40:16 -- host/discovery.sh@117 -- # sleep 1 00:22:24.486 [2024-10-04 06:40:17.009115] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:24.486 [2024-10-04 06:40:17.068475] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:24.486 [2024-10-04 06:40:17.068523] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.486 [2024-10-04 06:40:17.068532] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:25.422 06:40:17 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:25.422 06:40:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.422 06:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.422 06:40:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.422 06:40:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.422 06:40:17 -- host/discovery.sh@59 -- # xargs 00:22:25.422 06:40:17 -- host/discovery.sh@59 -- # sort 00:22:25.422 06:40:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.422 06:40:17 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.422 06:40:17 -- host/discovery.sh@119 -- # get_bdev_list 00:22:25.422 06:40:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.422 06:40:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.422 06:40:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.422 06:40:17 -- host/discovery.sh@55 -- # xargs 00:22:25.422 06:40:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.422 06:40:17 -- host/discovery.sh@55 -- # sort 00:22:25.422 06:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.422 06:40:18 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:25.422 06:40:18 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:25.422 06:40:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:25.422 06:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.422 06:40:18 -- common/autotest_common.sh@10 -- 
# set +x 00:22:25.422 06:40:18 -- host/discovery.sh@63 -- # sort -n 00:22:25.422 06:40:18 -- host/discovery.sh@63 -- # xargs 00:22:25.422 06:40:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:25.422 06:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.682 06:40:18 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:25.682 06:40:18 -- host/discovery.sh@121 -- # get_notification_count 00:22:25.682 06:40:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:25.682 06:40:18 -- host/discovery.sh@74 -- # jq '. | length' 00:22:25.682 06:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.682 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.682 06:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.682 06:40:18 -- host/discovery.sh@74 -- # notification_count=0 00:22:25.682 06:40:18 -- host/discovery.sh@75 -- # notify_id=2 00:22:25.682 06:40:18 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:25.682 06:40:18 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:25.682 06:40:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.682 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.682 [2024-10-04 06:40:18.163297] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:25.682 [2024-10-04 06:40:18.163365] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.682 [2024-10-04 06:40:18.164555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.682 [2024-10-04 06:40:18.164595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.682 [2024-10-04 06:40:18.164610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.682 [2024-10-04 06:40:18.164619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.682 [2024-10-04 06:40:18.164629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.682 [2024-10-04 06:40:18.164638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.682 [2024-10-04 06:40:18.164648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.682 [2024-10-04 06:40:18.164657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.682 [2024-10-04 06:40:18.164667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 06:40:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.682 06:40:18 -- host/discovery.sh@127 -- # sleep 1 00:22:25.682 [2024-10-04 06:40:18.174510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.184531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.184659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.184712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.184732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.682 [2024-10-04 06:40:18.184743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 [2024-10-04 06:40:18.184761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.184777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.682 [2024-10-04 06:40:18.184787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.682 [2024-10-04 06:40:18.184808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.682 [2024-10-04 06:40:18.184839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.682 [2024-10-04 06:40:18.194597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.194695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.194747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.194767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.682 [2024-10-04 06:40:18.194778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 [2024-10-04 06:40:18.194795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.194810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.682 [2024-10-04 06:40:18.194834] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.682 [2024-10-04 06:40:18.194861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.682 [2024-10-04 06:40:18.194878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.682 [2024-10-04 06:40:18.204659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.204747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.204796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.204827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.682 [2024-10-04 06:40:18.204859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 [2024-10-04 06:40:18.204876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.204891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.682 [2024-10-04 06:40:18.204901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.682 [2024-10-04 06:40:18.204910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.682 [2024-10-04 06:40:18.204925] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.682 [2024-10-04 06:40:18.214718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.214814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.214899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.214920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.682 [2024-10-04 06:40:18.214931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 [2024-10-04 06:40:18.214949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.214965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.682 [2024-10-04 06:40:18.214975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.682 [2024-10-04 06:40:18.214985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.682 [2024-10-04 06:40:18.215023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.682 [2024-10-04 06:40:18.224779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.224896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.224955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.224975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.682 [2024-10-04 06:40:18.224985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.682 [2024-10-04 06:40:18.225002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.682 [2024-10-04 06:40:18.225018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.682 [2024-10-04 06:40:18.225028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.682 [2024-10-04 06:40:18.225037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.682 [2024-10-04 06:40:18.225052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.682 [2024-10-04 06:40:18.234848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.682 [2024-10-04 06:40:18.234937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.234986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.682 [2024-10-04 06:40:18.235017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.683 [2024-10-04 06:40:18.235029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.683 [2024-10-04 06:40:18.235045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.683 [2024-10-04 06:40:18.235061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.683 [2024-10-04 06:40:18.235071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.683 [2024-10-04 06:40:18.235080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.683 [2024-10-04 06:40:18.235094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
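The connect()/reset cycles above are the expected failure mode at this point in the test: host/discovery.sh@126 has just removed the 4420 listener, so every host-side reconnect to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED) until the discovery poller digests the updated log page from port 8009 and drops the stale path. A minimal sketch of the listener swap driving this, assuming the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper (flags and values are the ones traced above):

  # add the 4421 listener first, then drop 4420; discovery on 8009 steers
  # the host controller from the refused 4420 path over to 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Once failover completes, the path assertion the test repeats (host/discovery.sh@63) reduces to:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expects: 4421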
00:22:25.683 [2024-10-04 06:40:18.244904] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:25.683 [2024-10-04 06:40:18.244992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.683 [2024-10-04 06:40:18.245043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.683 [2024-10-04 06:40:18.245062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294cf0 with addr=10.0.0.2, port=4420 00:22:25.683 [2024-10-04 06:40:18.245073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294cf0 is same with the state(5) to be set 00:22:25.683 [2024-10-04 06:40:18.245089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1294cf0 (9): Bad file descriptor 00:22:25.683 [2024-10-04 06:40:18.245104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:25.683 [2024-10-04 06:40:18.245113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:25.683 [2024-10-04 06:40:18.245123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:25.683 [2024-10-04 06:40:18.245137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:25.683 [2024-10-04 06:40:18.249305] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:25.683 [2024-10-04 06:40:18.249337] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:26.619 06:40:19 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:26.619 06:40:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.619 06:40:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.619 06:40:19 -- host/discovery.sh@59 -- # sort 00:22:26.619 06:40:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.619 06:40:19 -- host/discovery.sh@59 -- # xargs 00:22:26.619 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.619 06:40:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.619 06:40:19 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.619 06:40:19 -- host/discovery.sh@129 -- # get_bdev_list 00:22:26.619 06:40:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.619 06:40:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.619 06:40:19 -- host/discovery.sh@55 -- # xargs 00:22:26.619 06:40:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.619 06:40:19 -- host/discovery.sh@55 -- # sort 00:22:26.619 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.619 06:40:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.619 06:40:19 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.619 06:40:19 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:26.619 06:40:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.619 06:40:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.619 06:40:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.619 06:40:19 -- host/discovery.sh@63 -- # sort -n 00:22:26.619 06:40:19 -- 
common/autotest_common.sh@10 -- # set +x 00:22:26.619 06:40:19 -- host/discovery.sh@63 -- # xargs 00:22:26.877 06:40:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.877 06:40:19 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.877 06:40:19 -- host/discovery.sh@131 -- # get_notification_count 00:22:26.877 06:40:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:26.877 06:40:19 -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.877 06:40:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.877 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.877 06:40:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.877 06:40:19 -- host/discovery.sh@74 -- # notification_count=0 00:22:26.877 06:40:19 -- host/discovery.sh@75 -- # notify_id=2 00:22:26.877 06:40:19 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:26.877 06:40:19 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:26.877 06:40:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.877 06:40:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.877 06:40:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.877 06:40:19 -- host/discovery.sh@135 -- # sleep 1 00:22:27.812 06:40:20 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:27.812 06:40:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.812 06:40:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.812 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.812 06:40:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.812 06:40:20 -- host/discovery.sh@59 -- # sort 00:22:27.812 06:40:20 -- host/discovery.sh@59 -- # xargs 00:22:27.812 06:40:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.812 06:40:20 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:27.812 06:40:20 -- host/discovery.sh@137 -- # get_bdev_list 00:22:27.812 06:40:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.812 06:40:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.812 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.812 06:40:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.812 06:40:20 -- host/discovery.sh@55 -- # sort 00:22:27.812 06:40:20 -- host/discovery.sh@55 -- # xargs 00:22:27.812 06:40:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.071 06:40:20 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:28.071 06:40:20 -- host/discovery.sh@138 -- # get_notification_count 00:22:28.071 06:40:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.071 06:40:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.071 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.071 06:40:20 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:28.071 06:40:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:28.071 06:40:20 -- host/discovery.sh@74 -- # notification_count=2 00:22:28.071 06:40:20 -- host/discovery.sh@75 -- # notify_id=4 00:22:28.071 06:40:20 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:28.071 06:40:20 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:28.071 06:40:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:28.071 06:40:20 -- common/autotest_common.sh@10 -- # set +x 00:22:29.006 [2024-10-04 06:40:21.596594] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.006 [2024-10-04 06:40:21.596623] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.006 [2024-10-04 06:40:21.596643] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.006 [2024-10-04 06:40:21.682709] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:29.264 [2024-10-04 06:40:21.742255] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:29.264 [2024-10-04 06:40:21.742312] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:29.264 06:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.265 06:40:21 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@640 -- # local es=0 00:22:29.265 06:40:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.265 06:40:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.265 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 2024/10/04 06:40:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:29.265 request: 00:22:29.265 { 00:22:29.265 "method": "bdev_nvme_start_discovery", 00:22:29.265 "params": { 00:22:29.265 "name": "nvme", 00:22:29.265 "trtype": "tcp", 00:22:29.265 "traddr": "10.0.0.2", 00:22:29.265 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.265 "adrfam": "ipv4", 00:22:29.265 "trsvcid": "8009", 00:22:29.265 "wait_for_attach": true 00:22:29.265 } 00:22:29.265 } 00:22:29.265 Got JSON-RPC error response 00:22:29.265 GoRPCClient: error on JSON-RPC call 00:22:29.265 06:40:21 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:29.265 06:40:21 -- common/autotest_common.sh@643 -- # es=1 00:22:29.265 06:40:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:29.265 06:40:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:29.265 06:40:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:29.265 06:40:21 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.265 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.265 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # sort 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # xargs 00:22:29.265 06:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.265 06:40:21 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:29.265 06:40:21 -- host/discovery.sh@147 -- # get_bdev_list 00:22:29.265 06:40:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.265 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.265 06:40:21 -- host/discovery.sh@55 -- # sort 00:22:29.265 06:40:21 -- host/discovery.sh@55 -- # xargs 00:22:29.265 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 06:40:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.265 06:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.265 06:40:21 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.265 06:40:21 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@640 -- # local es=0 00:22:29.265 06:40:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:29.265 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.265 06:40:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.265 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.265 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 2024/10/04 06:40:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:29.265 request: 00:22:29.265 { 00:22:29.265 "method": "bdev_nvme_start_discovery", 00:22:29.265 "params": { 00:22:29.265 "name": "nvme_second", 00:22:29.265 "trtype": "tcp", 00:22:29.265 "traddr": "10.0.0.2", 00:22:29.265 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.265 "adrfam": "ipv4", 00:22:29.265 "trsvcid": "8009", 00:22:29.265 "wait_for_attach": true 00:22:29.265 } 00:22:29.265 } 00:22:29.265 Got JSON-RPC error response 00:22:29.265 
GoRPCClient: error on JSON-RPC call 00:22:29.265 06:40:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:29.265 06:40:21 -- common/autotest_common.sh@643 -- # es=1 00:22:29.265 06:40:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:29.265 06:40:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:29.265 06:40:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:29.265 06:40:21 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.265 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.265 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # sort 00:22:29.265 06:40:21 -- host/discovery.sh@67 -- # xargs 00:22:29.265 06:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.524 06:40:21 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:29.524 06:40:21 -- host/discovery.sh@153 -- # get_bdev_list 00:22:29.524 06:40:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.524 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.524 06:40:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.524 06:40:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.524 06:40:21 -- host/discovery.sh@55 -- # sort 00:22:29.524 06:40:21 -- host/discovery.sh@55 -- # xargs 00:22:29.524 06:40:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:29.524 06:40:21 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.524 06:40:21 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.524 06:40:21 -- common/autotest_common.sh@640 -- # local es=0 00:22:29.524 06:40:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.524 06:40:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:29.524 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.524 06:40:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:29.524 06:40:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:29.524 06:40:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.524 06:40:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:29.524 06:40:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.463 [2024-10-04 06:40:23.003935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.463 [2024-10-04 06:40:23.004021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.463 [2024-10-04 06:40:23.004043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294300 with addr=10.0.0.2, port=8010 00:22:30.463 [2024-10-04 06:40:23.004058] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:30.463 [2024-10-04 06:40:23.004068] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:30.463 [2024-10-04 06:40:23.004078] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:31.400 [2024-10-04 06:40:24.003990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.400 [2024-10-04 06:40:24.004087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.400 [2024-10-04 06:40:24.004109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1294300 with addr=10.0.0.2, port=8010 00:22:31.400 [2024-10-04 06:40:24.004133] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:31.400 [2024-10-04 06:40:24.004144] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:31.400 [2024-10-04 06:40:24.004155] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:32.336 [2024-10-04 06:40:25.003861] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:32.336 2024/10/04 06:40:25 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:32.336 request: 00:22:32.336 { 00:22:32.336 "method": "bdev_nvme_start_discovery", 00:22:32.336 "params": { 00:22:32.336 "name": "nvme_second", 00:22:32.336 "trtype": "tcp", 00:22:32.336 "traddr": "10.0.0.2", 00:22:32.336 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:32.336 "adrfam": "ipv4", 00:22:32.336 "trsvcid": "8010", 00:22:32.336 "attach_timeout_ms": 3000 00:22:32.336 } 00:22:32.336 } 00:22:32.336 Got JSON-RPC error response 00:22:32.336 GoRPCClient: error on JSON-RPC call 00:22:32.336 06:40:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:32.336 06:40:25 -- common/autotest_common.sh@643 -- # es=1 00:22:32.336 06:40:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:32.336 06:40:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:32.336 06:40:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:32.336 06:40:25 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:32.336 06:40:25 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:32.336 06:40:25 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:32.336 06:40:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.336 06:40:25 -- host/discovery.sh@67 -- # sort 00:22:32.336 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:22:32.595 06:40:25 -- host/discovery.sh@67 -- # xargs 00:22:32.595 06:40:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.595 06:40:25 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:32.595 06:40:25 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:32.595 06:40:25 -- host/discovery.sh@162 -- # kill 95820 00:22:32.595 06:40:25 -- host/discovery.sh@163 -- # nvmftestfini 00:22:32.595 06:40:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:32.595 06:40:25 -- nvmf/common.sh@116 -- # sync 00:22:32.595 06:40:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:32.595 06:40:25 -- nvmf/common.sh@119 -- # set +e 00:22:32.595 06:40:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:32.595 06:40:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:32.595 rmmod nvme_tcp 00:22:32.595 rmmod nvme_fabrics 00:22:32.595 rmmod nvme_keyring 00:22:32.595 06:40:25 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:22:32.595 06:40:25 -- nvmf/common.sh@123 -- # set -e 00:22:32.595 06:40:25 -- nvmf/common.sh@124 -- # return 0 00:22:32.595 06:40:25 -- nvmf/common.sh@477 -- # '[' -n 95770 ']' 00:22:32.595 06:40:25 -- nvmf/common.sh@478 -- # killprocess 95770 00:22:32.595 06:40:25 -- common/autotest_common.sh@926 -- # '[' -z 95770 ']' 00:22:32.595 06:40:25 -- common/autotest_common.sh@930 -- # kill -0 95770 00:22:32.595 06:40:25 -- common/autotest_common.sh@931 -- # uname 00:22:32.595 06:40:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:32.595 06:40:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95770 00:22:32.595 killing process with pid 95770 00:22:32.595 06:40:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:32.595 06:40:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:32.595 06:40:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95770' 00:22:32.595 06:40:25 -- common/autotest_common.sh@945 -- # kill 95770 00:22:32.595 06:40:25 -- common/autotest_common.sh@950 -- # wait 95770 00:22:32.853 06:40:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:32.854 06:40:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:32.854 06:40:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:32.854 06:40:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.854 06:40:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:32.854 06:40:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.854 06:40:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.854 06:40:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.113 06:40:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:33.113 00:22:33.113 real 0m14.184s 00:22:33.113 user 0m27.767s 00:22:33.113 sys 0m1.715s 00:22:33.113 06:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.113 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.113 ************************************ 00:22:33.113 END TEST nvmf_discovery 00:22:33.113 ************************************ 00:22:33.113 06:40:25 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:33.113 06:40:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:33.113 06:40:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:33.113 06:40:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.113 ************************************ 00:22:33.113 START TEST nvmf_discovery_remove_ifc 00:22:33.113 ************************************ 00:22:33.113 06:40:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:33.113 * Looking for test storage... 
00:22:33.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:33.113 06:40:25 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.113 06:40:25 -- nvmf/common.sh@7 -- # uname -s 00:22:33.113 06:40:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.113 06:40:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.113 06:40:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.113 06:40:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.113 06:40:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.113 06:40:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.113 06:40:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.113 06:40:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.113 06:40:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.113 06:40:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.114 06:40:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:33.114 06:40:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:33.114 06:40:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.114 06:40:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.114 06:40:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.114 06:40:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.114 06:40:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.114 06:40:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.114 06:40:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.114 06:40:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.114 06:40:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.114 06:40:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.114 06:40:25 -- 
paths/export.sh@5 -- # export PATH 00:22:33.114 06:40:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.114 06:40:25 -- nvmf/common.sh@46 -- # : 0 00:22:33.114 06:40:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:33.114 06:40:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:33.114 06:40:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:33.114 06:40:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.114 06:40:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.114 06:40:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:33.114 06:40:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:33.114 06:40:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:33.114 06:40:25 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:33.114 06:40:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:33.114 06:40:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.114 06:40:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:33.114 06:40:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:33.114 06:40:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:33.114 06:40:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.114 06:40:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.114 06:40:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.114 06:40:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:33.114 06:40:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:33.114 06:40:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:33.114 06:40:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:33.114 06:40:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:33.114 06:40:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:33.114 06:40:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.114 06:40:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.114 06:40:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:33.114 06:40:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:33.114 06:40:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.114 06:40:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.114 06:40:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.114 06:40:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
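NVMF_TARGET_NS_CMD, set just above, is the prefix the harness puts in front of every target-side command, so the target's interfaces and the nvmf_tgt process itself live inside the nvmf_tgt_ns_spdk namespace while the initiator stays in the root namespace. The pattern, sketched with the values from this log (the array expansion mirrors what nvmf/common.sh does; the actual call sites follow in the trace below):

  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  # target-side network config runs inside the namespace...
  "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # ...and so does the target app (cf. nvmf/common.sh@208 below, which
  # prepends the array to NVMF_APP before nvmfappstart launches it)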
00:22:33.114 06:40:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.114 06:40:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.114 06:40:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.114 06:40:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.114 06:40:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:33.114 06:40:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:33.114 Cannot find device "nvmf_tgt_br" 00:22:33.114 06:40:25 -- nvmf/common.sh@154 -- # true 00:22:33.114 06:40:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.114 Cannot find device "nvmf_tgt_br2" 00:22:33.114 06:40:25 -- nvmf/common.sh@155 -- # true 00:22:33.114 06:40:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:33.114 06:40:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:33.114 Cannot find device "nvmf_tgt_br" 00:22:33.372 06:40:25 -- nvmf/common.sh@157 -- # true 00:22:33.372 06:40:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:33.372 Cannot find device "nvmf_tgt_br2" 00:22:33.372 06:40:25 -- nvmf/common.sh@158 -- # true 00:22:33.372 06:40:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:33.372 06:40:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:33.372 06:40:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.372 06:40:25 -- nvmf/common.sh@161 -- # true 00:22:33.372 06:40:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.372 06:40:25 -- nvmf/common.sh@162 -- # true 00:22:33.372 06:40:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.372 06:40:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.372 06:40:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.372 06:40:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.372 06:40:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.372 06:40:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.372 06:40:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.372 06:40:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.372 06:40:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:33.372 06:40:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:33.372 06:40:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:33.372 06:40:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:33.372 06:40:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:33.372 06:40:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.372 06:40:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.372 06:40:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.372 06:40:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:33.372 06:40:26 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:22:33.372 06:40:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.372 06:40:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.372 06:40:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.631 06:40:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.631 06:40:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.631 06:40:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:33.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:22:33.631 00:22:33.631 --- 10.0.0.2 ping statistics --- 00:22:33.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.631 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:33.631 06:40:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:33.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:22:33.631 00:22:33.631 --- 10.0.0.3 ping statistics --- 00:22:33.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.631 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:33.631 06:40:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:22:33.631 00:22:33.631 --- 10.0.0.1 ping statistics --- 00:22:33.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.631 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:22:33.631 06:40:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.631 06:40:26 -- nvmf/common.sh@421 -- # return 0 00:22:33.631 06:40:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:33.631 06:40:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.631 06:40:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:33.631 06:40:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:33.631 06:40:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.631 06:40:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:33.631 06:40:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:33.631 06:40:26 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:33.631 06:40:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:33.631 06:40:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:33.631 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 06:40:26 -- nvmf/common.sh@469 -- # nvmfpid=96327 00:22:33.631 06:40:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.631 06:40:26 -- nvmf/common.sh@470 -- # waitforlisten 96327 00:22:33.631 06:40:26 -- common/autotest_common.sh@819 -- # '[' -z 96327 ']' 00:22:33.631 06:40:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.631 06:40:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:33.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.631 06:40:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
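waitforlisten then blocks until the nvmf_tgt just launched (pid 96327) is actually serving JSON-RPC on /var/tmp/spdk.sock before the test issues any RPCs. A rough equivalent of what it waits for, assuming the stock scripts/rpc.py client (the real helper in autotest_common.sh also re-checks on each pass that the pid is still alive):

  # poll the RPC socket until the target answers; rpc_get_methods is a
  # cheap read-only call that succeeds as soon as the server is up
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 96327 || exit 1   # bail out if the target died during startup
      sleep 0.5
  done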
00:22:33.631 06:40:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:33.631 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 [2024-10-04 06:40:26.149907] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:33.631 [2024-10-04 06:40:26.149997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.631 [2024-10-04 06:40:26.283192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.889 [2024-10-04 06:40:26.361046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:33.889 [2024-10-04 06:40:26.361216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.890 [2024-10-04 06:40:26.361228] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.890 [2024-10-04 06:40:26.361237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.890 [2024-10-04 06:40:26.361261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.869 06:40:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:34.869 06:40:27 -- common/autotest_common.sh@852 -- # return 0 00:22:34.869 06:40:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:34.869 06:40:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:34.869 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.869 06:40:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.869 06:40:27 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:34.869 06:40:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.869 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.869 [2024-10-04 06:40:27.233458] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.869 [2024-10-04 06:40:27.241611] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:34.869 null0 00:22:34.869 [2024-10-04 06:40:27.273483] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.869 06:40:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.869 06:40:27 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96377 00:22:34.869 06:40:27 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:34.869 06:40:27 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96377 /tmp/host.sock 00:22:34.869 06:40:27 -- common/autotest_common.sh@819 -- # '[' -z 96377 ']' 00:22:34.869 06:40:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:34.869 06:40:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:34.869 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:34.869 06:40:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:34.869 06:40:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:34.869 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.869 [2024-10-04 06:40:27.341849] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:22:34.869 [2024-10-04 06:40:27.341940] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96377 ] 00:22:34.869 [2024-10-04 06:40:27.476212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.133 [2024-10-04 06:40:27.567190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.133 [2024-10-04 06:40:27.567416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.701 06:40:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:35.701 06:40:28 -- common/autotest_common.sh@852 -- # return 0 00:22:35.701 06:40:28 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.701 06:40:28 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:35.701 06:40:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.701 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.701 06:40:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:35.701 06:40:28 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:35.701 06:40:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.701 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.959 06:40:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:35.959 06:40:28 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:35.959 06:40:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.960 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.896 [2024-10-04 06:40:29.486503] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.896 [2024-10-04 06:40:29.486553] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.896 [2024-10-04 06:40:29.486576] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.896 [2024-10-04 06:40:29.572739] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:37.155 [2024-10-04 06:40:29.628707] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:37.155 [2024-10-04 06:40:29.628777] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:37.155 [2024-10-04 06:40:29.628834] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:37.155 [2024-10-04 06:40:29.628857] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.155 [2024-10-04 06:40:29.628887] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.155 06:40:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.155 06:40:29 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.155 [2024-10-04 06:40:29.635020] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x64d6c0 was disconnected and freed. delete nvme_qpair. 00:22:37.155 06:40:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.155 06:40:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.155 06:40:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.155 06:40:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.155 06:40:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.155 06:40:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.155 06:40:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.092 06:40:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.092 06:40:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.092 06:40:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.092 06:40:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:38.092 06:40:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.092 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.092 06:40:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.351 06:40:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:38.351 06:40:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.351 06:40:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.287 06:40:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.287 06:40:31 -- common/autotest_common.sh@10 -- # set +x 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.287 06:40:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:39.287 06:40:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.223 06:40:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.223 06:40:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
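[Editor's annotation] The one-second get_bdev_list cycle repeating through this stretch of the trace is the test's wait_for_bdev polling loop: it re-queries the bdev list over the host RPC socket until it matches the expected value (nvme0n1 after attach, the empty string after the interface is pulled). A minimal standalone sketch of the same pattern, using the rpc_cmd/jq pipeline exactly as traced above (the real helper's timeout handling, if any, is omitted):

    get_bdev_list() {
        # Sorted, space-separated bdev names reported by the host app
        # over its RPC socket.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected
        # string; an empty argument waits for the list to drain.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }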
00:22:40.223 06:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.223 06:40:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.223 06:40:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.223 06:40:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.223 06:40:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.223 06:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.481 06:40:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.481 06:40:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.418 06:40:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.418 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.418 06:40:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.418 06:40:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.358 06:40:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.358 06:40:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.358 06:40:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.358 06:40:35 -- common/autotest_common.sh@10 -- # set +x 00:22:42.358 06:40:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.358 06:40:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.358 06:40:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.358 06:40:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.617 06:40:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.617 [2024-10-04 06:40:35.056662] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:42.617 [2024-10-04 06:40:35.056743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.617 [2024-10-04 06:40:35.056761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.617 [2024-10-04 06:40:35.056775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.617 [2024-10-04 06:40:35.056785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.617 [2024-10-04 06:40:35.056796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.617 [2024-10-04 06:40:35.056805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.617 [2024-10-04 06:40:35.056861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.617 [2024-10-04 06:40:35.056876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.617 [2024-10-04 06:40:35.056887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.617 [2024-10-04 06:40:35.056897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.617 [2024-10-04 06:40:35.056906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6294b0 is same with the state(5) to be set 00:22:42.617 06:40:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.617 [2024-10-04 06:40:35.066657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6294b0 (9): Bad file descriptor 00:22:42.617 [2024-10-04 06:40:35.076681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:43.553 06:40:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.553 06:40:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.553 06:40:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.553 06:40:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.553 06:40:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.553 06:40:36 -- common/autotest_common.sh@10 -- # set +x 00:22:43.553 06:40:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.553 [2024-10-04 06:40:36.130935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:44.489 [2024-10-04 06:40:37.154947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:44.489 [2024-10-04 06:40:37.155069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6294b0 with addr=10.0.0.2, port=4420 00:22:44.489 [2024-10-04 06:40:37.155105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6294b0 is same with the state(5) to be set 00:22:44.489 [2024-10-04 06:40:37.155154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:44.489 [2024-10-04 06:40:37.155180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:44.489 [2024-10-04 06:40:37.155202] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:44.489 [2024-10-04 06:40:37.155225] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:44.489 [2024-10-04 06:40:37.156004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6294b0 (9): Bad file descriptor 00:22:44.489 [2024-10-04 06:40:37.156069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
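[Editor's annotation] The errno 110 ("Connection timed out", i.e. ETIMEDOUT) failures above, and the reset/reconnect attempts that follow, are the expected consequence of the address removal performed earlier in the test. These two commands, copied from the @75/@76 steps of discovery_remove_ifc.sh in the trace, are what severed the 10.0.0.2 path while the controller was still connected:

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down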
00:22:44.489 [2024-10-04 06:40:37.156127] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:44.489 [2024-10-04 06:40:37.156197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.489 [2024-10-04 06:40:37.156242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.489 [2024-10-04 06:40:37.156270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.489 [2024-10-04 06:40:37.156304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.489 [2024-10-04 06:40:37.156328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.489 [2024-10-04 06:40:37.156351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.489 [2024-10-04 06:40:37.156374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.489 [2024-10-04 06:40:37.156396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.489 [2024-10-04 06:40:37.156420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:44.489 [2024-10-04 06:40:37.156449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.489 [2024-10-04 06:40:37.156471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
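[Editor's annotation] How quickly the failed resets give up is set by the reconnect options the test passed when it started discovery: a 1-second reconnect delay inside a 2-second controller-loss timeout allows only a couple of attempts before the controller is deleted and the discovery entry removed, which is the nvme0n1 disappearance the wait_for_bdev '' loop is waiting for. The invocation, with arguments copied from the start of the test:

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach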
00:22:44.489 [2024-10-04 06:40:37.156535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6148f0 (9): Bad file descriptor 00:22:44.489 [2024-10-04 06:40:37.157537] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:44.489 [2024-10-04 06:40:37.157573] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:44.748 06:40:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.748 06:40:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:44.748 06:40:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.683 06:40:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.683 06:40:38 -- common/autotest_common.sh@10 -- # set +x 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.683 06:40:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.683 06:40:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.683 06:40:38 -- common/autotest_common.sh@10 -- # set +x 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.683 06:40:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:45.683 06:40:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.620 [2024-10-04 06:40:39.167859] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:46.620 [2024-10-04 06:40:39.167902] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:46.620 [2024-10-04 06:40:39.167921] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:46.620 [2024-10-04 06:40:39.253971] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:46.879 [2024-10-04 06:40:39.309166] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:46.879 [2024-10-04 06:40:39.309486] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:46.879 [2024-10-04 06:40:39.309527] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:46.879 [2024-10-04 06:40:39.309548] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:46.879 [2024-10-04 06:40:39.309558] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.879 [2024-10-04 06:40:39.316426] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x658330 was disconnected and freed. delete nvme_qpair. 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.879 06:40:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.879 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.879 06:40:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:46.879 06:40:39 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96377 00:22:46.879 06:40:39 -- common/autotest_common.sh@926 -- # '[' -z 96377 ']' 00:22:46.879 06:40:39 -- common/autotest_common.sh@930 -- # kill -0 96377 00:22:46.879 06:40:39 -- common/autotest_common.sh@931 -- # uname 00:22:46.879 06:40:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:46.879 06:40:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96377 00:22:46.879 killing process with pid 96377 00:22:46.879 06:40:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:46.879 06:40:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:46.879 06:40:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96377' 00:22:46.879 06:40:39 -- common/autotest_common.sh@945 -- # kill 96377 00:22:46.879 06:40:39 -- common/autotest_common.sh@950 -- # wait 96377 00:22:47.138 06:40:39 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:47.138 06:40:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:47.138 06:40:39 -- nvmf/common.sh@116 -- # sync 00:22:47.138 06:40:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:47.138 06:40:39 -- nvmf/common.sh@119 -- # set +e 00:22:47.138 06:40:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:47.138 06:40:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:47.138 rmmod nvme_tcp 00:22:47.138 rmmod nvme_fabrics 00:22:47.138 rmmod nvme_keyring 00:22:47.138 06:40:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:47.138 06:40:39 -- nvmf/common.sh@123 -- # set -e 00:22:47.138 06:40:39 -- nvmf/common.sh@124 -- # return 0 00:22:47.139 06:40:39 -- nvmf/common.sh@477 -- # '[' -n 96327 ']' 00:22:47.139 06:40:39 -- nvmf/common.sh@478 -- # killprocess 96327 00:22:47.139 06:40:39 -- common/autotest_common.sh@926 -- # '[' -z 96327 ']' 00:22:47.139 06:40:39 -- common/autotest_common.sh@930 -- # kill -0 96327 00:22:47.139 06:40:39 -- common/autotest_common.sh@931 -- # uname 00:22:47.139 06:40:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:47.139 06:40:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96327 00:22:47.397 killing process with pid 96327 00:22:47.397 06:40:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:47.397 06:40:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:22:47.397 06:40:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96327' 00:22:47.397 06:40:39 -- common/autotest_common.sh@945 -- # kill 96327 00:22:47.397 06:40:39 -- common/autotest_common.sh@950 -- # wait 96327 00:22:47.656 06:40:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:47.656 06:40:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:47.656 06:40:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:47.656 06:40:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.656 06:40:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:47.656 06:40:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.656 06:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.656 06:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.656 06:40:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:47.656 00:22:47.656 real 0m14.523s 00:22:47.656 user 0m24.873s 00:22:47.656 sys 0m1.626s 00:22:47.656 06:40:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.656 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.656 ************************************ 00:22:47.656 END TEST nvmf_discovery_remove_ifc 00:22:47.656 ************************************ 00:22:47.656 06:40:40 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:47.656 06:40:40 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:47.656 06:40:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:47.656 06:40:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.656 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.656 ************************************ 00:22:47.656 START TEST nvmf_digest 00:22:47.656 ************************************ 00:22:47.656 06:40:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:47.656 * Looking for test storage... 
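[Editor's annotation] The START/END banners and the real/user/sys summary above come from the run_test wrapper in common/autotest_common.sh, which every test in this log is launched through. A simplified sketch of its shape, assuming only what the banners and timing output show (the real helper also manages xtrace state and exit-code propagation, which is omitted here):

    run_test() {
        # Print the START banner, time the test command, then print END.
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # e.g. run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp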
00:22:47.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:47.656 06:40:40 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:47.656 06:40:40 -- nvmf/common.sh@7 -- # uname -s 00:22:47.656 06:40:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.656 06:40:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.656 06:40:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.656 06:40:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.656 06:40:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.656 06:40:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.656 06:40:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.656 06:40:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.656 06:40:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.656 06:40:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.656 06:40:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:47.657 06:40:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:22:47.657 06:40:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.657 06:40:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.657 06:40:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:47.657 06:40:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:47.657 06:40:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.657 06:40:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.657 06:40:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.657 06:40:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.657 06:40:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.657 06:40:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.657 06:40:40 -- paths/export.sh@5 
-- # export PATH 00:22:47.657 06:40:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.657 06:40:40 -- nvmf/common.sh@46 -- # : 0 00:22:47.657 06:40:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.657 06:40:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.657 06:40:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.657 06:40:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.657 06:40:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.657 06:40:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.657 06:40:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.657 06:40:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.657 06:40:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:47.657 06:40:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:47.657 06:40:40 -- host/digest.sh@16 -- # runtime=2 00:22:47.657 06:40:40 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:47.657 06:40:40 -- host/digest.sh@132 -- # nvmftestinit 00:22:47.657 06:40:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.657 06:40:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.657 06:40:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.657 06:40:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.657 06:40:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.657 06:40:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.657 06:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.657 06:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.657 06:40:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:47.657 06:40:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:47.657 06:40:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:47.657 06:40:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:47.657 06:40:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:47.657 06:40:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:47.657 06:40:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.657 06:40:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.657 06:40:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:47.657 06:40:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:47.657 06:40:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:47.657 06:40:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:47.657 06:40:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:47.657 06:40:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.657 06:40:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:47.657 06:40:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:47.657 06:40:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:47.657 06:40:40 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:47.657 06:40:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:47.657 06:40:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:47.916 Cannot find device "nvmf_tgt_br" 00:22:47.916 06:40:40 -- nvmf/common.sh@154 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:47.916 Cannot find device "nvmf_tgt_br2" 00:22:47.916 06:40:40 -- nvmf/common.sh@155 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:47.916 06:40:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:47.916 Cannot find device "nvmf_tgt_br" 00:22:47.916 06:40:40 -- nvmf/common.sh@157 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:47.916 Cannot find device "nvmf_tgt_br2" 00:22:47.916 06:40:40 -- nvmf/common.sh@158 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:47.916 06:40:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:47.916 06:40:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:47.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.916 06:40:40 -- nvmf/common.sh@161 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:47.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.916 06:40:40 -- nvmf/common.sh@162 -- # true 00:22:47.916 06:40:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:47.916 06:40:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:47.916 06:40:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:47.916 06:40:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:47.916 06:40:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:47.916 06:40:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:47.916 06:40:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:47.916 06:40:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:47.916 06:40:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:47.916 06:40:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:47.916 06:40:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:47.916 06:40:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:47.916 06:40:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:47.916 06:40:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:47.916 06:40:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:47.916 06:40:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.916 06:40:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:47.916 06:40:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:47.916 06:40:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:48.174 06:40:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:48.174 06:40:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:48.174 
06:40:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:48.174 06:40:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:48.174 06:40:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:48.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:48.174 00:22:48.174 --- 10.0.0.2 ping statistics --- 00:22:48.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.174 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:48.174 06:40:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:48.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:48.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:22:48.174 00:22:48.174 --- 10.0.0.3 ping statistics --- 00:22:48.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.174 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:48.174 06:40:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:48.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:48.174 00:22:48.174 --- 10.0.0.1 ping statistics --- 00:22:48.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.174 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:48.174 06:40:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.174 06:40:40 -- nvmf/common.sh@421 -- # return 0 00:22:48.174 06:40:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:48.174 06:40:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.174 06:40:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:48.174 06:40:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:48.174 06:40:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.174 06:40:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:48.174 06:40:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:48.174 06:40:40 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:48.174 06:40:40 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:48.174 06:40:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:48.174 06:40:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.174 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 ************************************ 00:22:48.174 START TEST nvmf_digest_clean 00:22:48.174 ************************************ 00:22:48.174 06:40:40 -- common/autotest_common.sh@1104 -- # run_digest 00:22:48.174 06:40:40 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:48.174 06:40:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.174 06:40:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:48.174 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 06:40:40 -- nvmf/common.sh@469 -- # nvmfpid=96790 00:22:48.174 06:40:40 -- nvmf/common.sh@470 -- # waitforlisten 96790 00:22:48.174 06:40:40 -- common/autotest_common.sh@819 -- # '[' -z 96790 ']' 00:22:48.174 06:40:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:48.174 06:40:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
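[Editor's annotation] The successful pings above confirm the veth topology nvmf_veth_init just built: nvmf_init_if (10.0.0.1) in the root namespace, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, with all veth peers enslaved to the nvmf_br bridge. A condensed sketch of that setup, with commands taken from the trace (the second target interface and the iptables ACCEPT rules follow the same pattern and are left out):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br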
00:22:48.174 06:40:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:48.174 06:40:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.174 06:40:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:48.174 06:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:48.174 [2024-10-04 06:40:40.750215] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:48.174 [2024-10-04 06:40:40.750349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.432 [2024-10-04 06:40:40.890614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.432 [2024-10-04 06:40:40.980489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:48.432 [2024-10-04 06:40:40.980673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.432 [2024-10-04 06:40:40.980690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.432 [2024-10-04 06:40:40.980701] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.432 [2024-10-04 06:40:40.980736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.997 06:40:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.997 06:40:41 -- common/autotest_common.sh@852 -- # return 0 00:22:48.997 06:40:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:48.997 06:40:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:48.997 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:22:48.997 06:40:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.997 06:40:41 -- host/digest.sh@120 -- # common_target_config 00:22:48.997 06:40:41 -- host/digest.sh@43 -- # rpc_cmd 00:22:48.997 06:40:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.997 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.255 null0 00:22:49.255 [2024-10-04 06:40:41.798284] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.255 [2024-10-04 06:40:41.822467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.255 06:40:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.255 06:40:41 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:49.255 06:40:41 -- host/digest.sh@77 -- # local rw bs qd 00:22:49.255 06:40:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:49.255 06:40:41 -- host/digest.sh@80 -- # rw=randread 00:22:49.255 06:40:41 -- host/digest.sh@80 -- # bs=4096 00:22:49.255 06:40:41 -- host/digest.sh@80 -- # qd=128 00:22:49.255 06:40:41 -- host/digest.sh@82 -- # bperfpid=96840 00:22:49.255 06:40:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:49.255 06:40:41 -- host/digest.sh@83 -- # waitforlisten 96840 /var/tmp/bperf.sock 00:22:49.255 06:40:41 -- common/autotest_common.sh@819 -- # '[' -z 96840 ']' 00:22:49.255 06:40:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:49.255 06:40:41 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:49.255 06:40:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:49.255 06:40:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.255 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.255 [2024-10-04 06:40:41.909117] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:49.255 [2024-10-04 06:40:41.909249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96840 ] 00:22:49.513 [2024-10-04 06:40:42.057251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.513 [2024-10-04 06:40:42.139912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.447 06:40:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.447 06:40:42 -- common/autotest_common.sh@852 -- # return 0 00:22:50.447 06:40:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:50.447 06:40:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:50.447 06:40:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:50.705 06:40:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.705 06:40:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.270 nvme0n1 00:22:51.270 06:40:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:51.270 06:40:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:51.270 Running I/O for 2 seconds... 
00:22:53.169 00:22:53.169 Latency(us) 00:22:53.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:53.169 nvme0n1 : 2.00 22811.55 89.11 0.00 0.00 5605.52 2383.13 11856.06 00:22:53.169 =================================================================================================================== 00:22:53.169 Total : 22811.55 89.11 0.00 0.00 5605.52 2383.13 11856.06 00:22:53.169 0 00:22:53.169 06:40:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:53.169 06:40:45 -- host/digest.sh@92 -- # get_accel_stats 00:22:53.169 06:40:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:53.169 06:40:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:53.169 06:40:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:53.169 | select(.opcode=="crc32c") 00:22:53.169 | "\(.module_name) \(.executed)"' 00:22:53.427 06:40:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:53.427 06:40:46 -- host/digest.sh@93 -- # exp_module=software 00:22:53.427 06:40:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:53.427 06:40:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:53.428 06:40:46 -- host/digest.sh@97 -- # killprocess 96840 00:22:53.428 06:40:46 -- common/autotest_common.sh@926 -- # '[' -z 96840 ']' 00:22:53.428 06:40:46 -- common/autotest_common.sh@930 -- # kill -0 96840 00:22:53.428 06:40:46 -- common/autotest_common.sh@931 -- # uname 00:22:53.428 06:40:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:53.428 06:40:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96840 00:22:53.428 06:40:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:53.428 killing process with pid 96840 00:22:53.428 06:40:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:53.428 06:40:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96840' 00:22:53.428 06:40:46 -- common/autotest_common.sh@945 -- # kill 96840 00:22:53.428 Received shutdown signal, test time was about 2.000000 seconds 00:22:53.428 00:22:53.428 Latency(us) 00:22:53.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.428 =================================================================================================================== 00:22:53.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.428 06:40:46 -- common/autotest_common.sh@950 -- # wait 96840 00:22:53.686 06:40:46 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:53.686 06:40:46 -- host/digest.sh@77 -- # local rw bs qd 00:22:53.686 06:40:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:53.686 06:40:46 -- host/digest.sh@80 -- # rw=randread 00:22:53.686 06:40:46 -- host/digest.sh@80 -- # bs=131072 00:22:53.686 06:40:46 -- host/digest.sh@80 -- # qd=16 00:22:53.686 06:40:46 -- host/digest.sh@82 -- # bperfpid=96935 00:22:53.686 06:40:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:53.686 06:40:46 -- host/digest.sh@83 -- # waitforlisten 96935 /var/tmp/bperf.sock 00:22:53.686 06:40:46 -- common/autotest_common.sh@819 -- # '[' -z 96935 ']' 00:22:53.686 06:40:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.686 06:40:46 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:53.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:53.686 06:40:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.686 06:40:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:53.686 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:22:53.944 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:53.944 Zero copy mechanism will not be used. 00:22:53.944 [2024-10-04 06:40:46.403686] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:53.944 [2024-10-04 06:40:46.403809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96935 ] 00:22:53.944 [2024-10-04 06:40:46.536808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.944 [2024-10-04 06:40:46.620319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.881 06:40:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:54.881 06:40:47 -- common/autotest_common.sh@852 -- # return 0 00:22:54.881 06:40:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:54.881 06:40:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:54.881 06:40:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:55.140 06:40:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:55.140 06:40:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:55.399 nvme0n1 00:22:55.399 06:40:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:55.399 06:40:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:55.399 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:55.399 Zero copy mechanism will not be used. 00:22:55.399 Running I/O for 2 seconds... 
00:22:57.973 00:22:57.974 Latency(us) 00:22:57.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.974 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:57.974 nvme0n1 : 2.00 10190.71 1273.84 0.00 0.00 1567.21 651.64 3306.59 00:22:57.974 =================================================================================================================== 00:22:57.974 Total : 10190.71 1273.84 0.00 0.00 1567.21 651.64 3306.59 00:22:57.974 0 00:22:57.974 06:40:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:57.974 06:40:50 -- host/digest.sh@92 -- # get_accel_stats 00:22:57.974 06:40:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:57.974 06:40:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:57.974 | select(.opcode=="crc32c") 00:22:57.974 | "\(.module_name) \(.executed)"' 00:22:57.974 06:40:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:57.974 06:40:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:57.974 06:40:50 -- host/digest.sh@93 -- # exp_module=software 00:22:57.974 06:40:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:57.974 06:40:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:57.974 06:40:50 -- host/digest.sh@97 -- # killprocess 96935 00:22:57.974 06:40:50 -- common/autotest_common.sh@926 -- # '[' -z 96935 ']' 00:22:57.974 06:40:50 -- common/autotest_common.sh@930 -- # kill -0 96935 00:22:57.974 06:40:50 -- common/autotest_common.sh@931 -- # uname 00:22:57.974 06:40:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:57.974 06:40:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96935 00:22:57.974 06:40:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:57.974 06:40:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:57.974 killing process with pid 96935 00:22:57.974 06:40:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96935' 00:22:57.974 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.974 00:22:57.974 Latency(us) 00:22:57.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.974 =================================================================================================================== 00:22:57.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.974 06:40:50 -- common/autotest_common.sh@945 -- # kill 96935 00:22:57.974 06:40:50 -- common/autotest_common.sh@950 -- # wait 96935 00:22:57.974 06:40:50 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:57.974 06:40:50 -- host/digest.sh@77 -- # local rw bs qd 00:22:57.974 06:40:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:57.974 06:40:50 -- host/digest.sh@80 -- # rw=randwrite 00:22:57.974 06:40:50 -- host/digest.sh@80 -- # bs=4096 00:22:57.974 06:40:50 -- host/digest.sh@80 -- # qd=128 00:22:57.974 06:40:50 -- host/digest.sh@82 -- # bperfpid=97020 00:22:57.974 06:40:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:57.974 06:40:50 -- host/digest.sh@83 -- # waitforlisten 97020 /var/tmp/bperf.sock 00:22:57.974 06:40:50 -- common/autotest_common.sh@819 -- # '[' -z 97020 ']' 00:22:57.974 06:40:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:57.974 06:40:50 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:22:57.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.974 06:40:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.974 06:40:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.974 06:40:50 -- common/autotest_common.sh@10 -- # set +x 00:22:58.233 [2024-10-04 06:40:50.691575] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:22:58.233 [2024-10-04 06:40:50.691661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97020 ] 00:22:58.233 [2024-10-04 06:40:50.824169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.233 [2024-10-04 06:40:50.905036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.492 06:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.492 06:40:50 -- common/autotest_common.sh@852 -- # return 0 00:22:58.492 06:40:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:58.492 06:40:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:58.492 06:40:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:58.751 06:40:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.751 06:40:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.009 nvme0n1 00:22:59.010 06:40:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:59.010 06:40:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:59.268 Running I/O for 2 seconds... 
00:23:01.173 00:23:01.173 Latency(us) 00:23:01.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.173 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:01.173 nvme0n1 : 2.00 27699.28 108.20 0.00 0.00 4615.59 1891.61 8936.73 00:23:01.173 =================================================================================================================== 00:23:01.173 Total : 27699.28 108.20 0.00 0.00 4615.59 1891.61 8936.73 00:23:01.173 0 00:23:01.173 06:40:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:01.173 06:40:53 -- host/digest.sh@92 -- # get_accel_stats 00:23:01.173 06:40:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:01.173 06:40:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:01.173 06:40:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:01.173 | select(.opcode=="crc32c") 00:23:01.173 | "\(.module_name) \(.executed)"' 00:23:01.431 06:40:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:01.431 06:40:53 -- host/digest.sh@93 -- # exp_module=software 00:23:01.431 06:40:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:01.431 06:40:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:01.431 06:40:53 -- host/digest.sh@97 -- # killprocess 97020 00:23:01.431 06:40:53 -- common/autotest_common.sh@926 -- # '[' -z 97020 ']' 00:23:01.431 06:40:53 -- common/autotest_common.sh@930 -- # kill -0 97020 00:23:01.431 06:40:53 -- common/autotest_common.sh@931 -- # uname 00:23:01.431 06:40:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:01.431 06:40:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97020 00:23:01.431 killing process with pid 97020 00:23:01.431 Received shutdown signal, test time was about 2.000000 seconds 00:23:01.431 00:23:01.431 Latency(us) 00:23:01.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.431 =================================================================================================================== 00:23:01.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.431 06:40:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:01.431 06:40:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:01.431 06:40:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97020' 00:23:01.431 06:40:54 -- common/autotest_common.sh@945 -- # kill 97020 00:23:01.431 06:40:54 -- common/autotest_common.sh@950 -- # wait 97020 00:23:01.712 06:40:54 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:23:01.712 06:40:54 -- host/digest.sh@77 -- # local rw bs qd 00:23:01.712 06:40:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:01.712 06:40:54 -- host/digest.sh@80 -- # rw=randwrite 00:23:01.712 06:40:54 -- host/digest.sh@80 -- # bs=131072 00:23:01.712 06:40:54 -- host/digest.sh@80 -- # qd=16 00:23:01.712 06:40:54 -- host/digest.sh@82 -- # bperfpid=97098 00:23:01.712 06:40:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:01.712 06:40:54 -- host/digest.sh@83 -- # waitforlisten 97098 /var/tmp/bperf.sock 00:23:01.712 06:40:54 -- common/autotest_common.sh@819 -- # '[' -z 97098 ']' 00:23:01.712 06:40:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:01.712 06:40:54 -- common/autotest_common.sh@824 -- # 
local max_retries=100 00:23:01.712 06:40:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:01.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:01.713 06:40:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:01.713 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:23:01.713 [2024-10-04 06:40:54.328535] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:23:01.713 [2024-10-04 06:40:54.328940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97098 ] 00:23:01.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:01.713 Zero copy mechanism will not be used. 00:23:01.980 [2024-10-04 06:40:54.462370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.980 [2024-10-04 06:40:54.541126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.980 06:40:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:01.980 06:40:54 -- common/autotest_common.sh@852 -- # return 0 00:23:01.980 06:40:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:01.980 06:40:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:01.980 06:40:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:02.238 06:40:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.238 06:40:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.496 nvme0n1 00:23:02.755 06:40:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:02.755 06:40:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:02.755 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:02.755 Zero copy mechanism will not be used. 00:23:02.755 Running I/O for 2 seconds... 
00:23:04.658
00:23:04.658 Latency(us)
00:23:04.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.658 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:04.658 nvme0n1 : 2.00 8557.90 1069.74 0.00 0.00 1865.62 1534.14 10366.60
00:23:04.658 ===================================================================================================================
00:23:04.658 Total : 8557.90 1069.74 0.00 0.00 1865.62 1534.14 10366.60
00:23:04.658 0
00:23:04.658 06:40:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:23:04.658 06:40:57 -- host/digest.sh@92 -- # get_accel_stats
00:23:04.658 06:40:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:04.658 06:40:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:04.658 06:40:57 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:04.658 | select(.opcode=="crc32c")
00:23:04.658 | "\(.module_name) \(.executed)"'
00:23:04.917 06:40:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:23:04.917 06:40:57 -- host/digest.sh@93 -- # exp_module=software
00:23:04.917 06:40:57 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:23:04.917 06:40:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:04.917 06:40:57 -- host/digest.sh@97 -- # killprocess 97098
00:23:04.917 06:40:57 -- common/autotest_common.sh@926 -- # '[' -z 97098 ']'
00:23:04.917 06:40:57 -- common/autotest_common.sh@930 -- # kill -0 97098
00:23:04.917 06:40:57 -- common/autotest_common.sh@931 -- # uname
00:23:04.917 06:40:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:04.917 06:40:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97098
00:23:05.175 killing process with pid 97098
Received shutdown signal, test time was about 2.000000 seconds
00:23:05.175
00:23:05.175 Latency(us)
00:23:05.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.175 ===================================================================================================================
00:23:05.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:05.175 06:40:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:05.175 06:40:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:05.175 06:40:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97098'
00:23:05.175 06:40:57 -- common/autotest_common.sh@945 -- # kill 97098
00:23:05.175 06:40:57 -- common/autotest_common.sh@950 -- # wait 97098
00:23:05.434 06:40:57 -- host/digest.sh@126 -- # killprocess 96790
00:23:05.434 06:40:57 -- common/autotest_common.sh@926 -- # '[' -z 96790 ']'
00:23:05.434 06:40:57 -- common/autotest_common.sh@930 -- # kill -0 96790
00:23:05.434 06:40:57 -- common/autotest_common.sh@931 -- # uname
00:23:05.434 06:40:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:05.434 06:40:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96790
00:23:05.434 killing process with pid 96790
06:40:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
06:40:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
06:40:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96790'
06:40:57 -- common/autotest_common.sh@945 -- # kill 96790
06:40:57 -- common/autotest_common.sh@950 -- # wait 96790
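Each pass/fail decision above hinges on one jq filter over bdevperf's accel statistics: pull the (module, executed) pair for the crc32c opcode and require that the expected module, software here since no accel hardware is configured, executed a nonzero number of digests. Condensed into a form that could be run by hand while a bperf instance is still up (illustrative only; error handling omitted):

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
          jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  [[ $acc_module == software ]] && (( acc_executed > 0 )) &&
      echo "crc32c handled by $acc_module ($acc_executed operations)"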
00:23:05.693 ************************************
00:23:05.693 END TEST nvmf_digest_clean
00:23:05.693 ************************************
00:23:05.693
00:23:05.693 real 0m17.474s
00:23:05.693 user 0m32.581s
00:23:05.693 sys 0m4.719s
00:23:05.693 06:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:05.693 06:40:58 -- common/autotest_common.sh@10 -- # set +x
00:23:05.693 06:40:58 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:23:05.693 06:40:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:23:05.693 06:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:23:05.693 06:40:58 -- common/autotest_common.sh@10 -- # set +x
00:23:05.693 ************************************
00:23:05.693 START TEST nvmf_digest_error
00:23:05.693 ************************************
00:23:05.693 06:40:58 -- common/autotest_common.sh@1104 -- # run_digest_error
00:23:05.693 06:40:58 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:23:05.693 06:40:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:23:05.693 06:40:58 -- common/autotest_common.sh@712 -- # xtrace_disable
00:23:05.693 06:40:58 -- common/autotest_common.sh@10 -- # set +x
00:23:05.693 06:40:58 -- nvmf/common.sh@469 -- # nvmfpid=97197
00:23:05.693 06:40:58 -- nvmf/common.sh@470 -- # waitforlisten 97197
00:23:05.693 06:40:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:23:05.693 06:40:58 -- common/autotest_common.sh@819 -- # '[' -z 97197 ']'
00:23:05.693 06:40:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:05.693 06:40:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:05.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:05.693 06:40:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:05.693 06:40:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:05.693 06:40:58 -- common/autotest_common.sh@10 -- # set +x
00:23:05.693 [2024-10-04 06:40:58.292473] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:23:05.693 [2024-10-04 06:40:58.293167] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:05.951 [2024-10-04 06:40:58.432802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:05.951 [2024-10-04 06:40:58.498472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:05.951 [2024-10-04 06:40:58.498617] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:05.951 [2024-10-04 06:40:58.498629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:05.951 [2024-10-04 06:40:58.498638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
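Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the two notices above name both ways to inspect the trace buffer. A sketch of each (the spdk_trace binary path is an assumption based on this repo's build layout; the arguments come from the notices themselves):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0   # live snapshot, per the notice above
  cp /dev/shm/nvmf_trace.0 /tmp/                                   # or keep the raw buffer for offline analysis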
00:23:05.951 [2024-10-04 06:40:58.498660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:06.887 06:40:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:06.887 06:40:59 -- common/autotest_common.sh@852 -- # return 0
00:23:06.887 06:40:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:23:06.887 06:40:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:23:06.887 06:40:59 -- common/autotest_common.sh@10 -- # set +x
00:23:06.887 06:40:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:06.887 06:40:59 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:23:06.887 06:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:06.887 06:40:59 -- common/autotest_common.sh@10 -- # set +x
00:23:06.888 [2024-10-04 06:40:59.323214] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
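This is the pivotal step of the error-path test: while the target is still paused under --wait-for-rpc, the crc32c opcode is re-routed from the software module to the accel error-injection module, which behaves normally until an error is armed. A sketch (rpc.py without -s talks to the target's default /var/tmp/spdk.sock; the explicit framework_start_init here stands in for the initialization the harness drives next via common_target_config):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error   # must land before framework init to take effect
  $rpc framework_start_init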
00:23:06.888 06:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:06.888 06:40:59 -- host/digest.sh@104 -- # common_target_config
00:23:06.888 06:40:59 -- host/digest.sh@43 -- # rpc_cmd
00:23:06.888 06:40:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:06.888 06:40:59 -- common/autotest_common.sh@10 -- # set +x
00:23:06.888 null0
00:23:06.888 [2024-10-04 06:40:59.465795] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:06.888 [2024-10-04 06:40:59.489961] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:06.888 06:40:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:06.888 06:40:59 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:23:06.888 06:40:59 -- host/digest.sh@54 -- # local rw bs qd
00:23:06.888 06:40:59 -- host/digest.sh@56 -- # rw=randread
00:23:06.888 06:40:59 -- host/digest.sh@56 -- # bs=4096
00:23:06.888 06:40:59 -- host/digest.sh@56 -- # qd=128
00:23:06.888 06:40:59 -- host/digest.sh@58 -- # bperfpid=97246
00:23:06.888 06:40:59 -- host/digest.sh@60 -- # waitforlisten 97246 /var/tmp/bperf.sock
00:23:06.888 06:40:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:23:06.888 06:40:59 -- common/autotest_common.sh@819 -- # '[' -z 97246 ']'
00:23:06.888 06:40:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:06.888 06:40:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:06.888 06:40:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:06.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:06.888 06:40:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:06.888 06:40:59 -- common/autotest_common.sh@10 -- # set +x
00:23:06.888 [2024-10-04 06:40:59.542299] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:23:06.888 [2024-10-04 06:40:59.542375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97246 ]
00:23:07.146 [2024-10-04 06:40:59.677179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:07.146 [2024-10-04 06:40:59.756667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:08.082 06:41:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:08.082 06:41:00 -- common/autotest_common.sh@852 -- # return 0
00:23:08.082 06:41:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:08.082 06:41:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:08.340 06:41:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:08.340 06:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:08.340 06:41:00 -- common/autotest_common.sh@10 -- # set +x
00:23:08.340 06:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:08.340 06:41:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:08.340 06:41:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:08.598 nvme0n1
00:23:08.598 06:41:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:08.598 06:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:08.598 06:41:01 -- common/autotest_common.sh@10 -- # set +x
00:23:08.598 06:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:08.598 06:41:01 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:08.598 06:41:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:08.598 Running I/O for 2 seconds...
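Pulling the traced RPCs together, the read-error setup is: tell the initiator to retry failed I/O indefinitely (--bdev-retry-count -1, so digest failures below surface as retried transient transport errors rather than failing the job), attach with the data digest on, then arm the target-side error module to corrupt crc32c results (-t corrupt -i 256, arguments exactly as traced above). A sketch with the same parameters as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable   # target side: start with injection off
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt digests mid-run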
00:23:08.884 [2024-10-04 06:41:01.291194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0)
00:23:08.885 [2024-10-04 06:41:01.291256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.885 [2024-10-04 06:41:01.291277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 100 further entries from 06:41:01.304 through 06:41:02.396 follow this same three-line pattern and are condensed here: a data digest error on tqpair=(0xcfd7f0) from nvme_tcp.c:1391, the offending READ command print (qid:1, varying cid and lba, len:1), and its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 p:0 m:0 dnr:0, repeating while bdevperf retries reads against the corrupted digests for the rest of the 2-second run ...]
00:23:09.923 [2024-10-04 06:41:02.406893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0)
00:23:09.923 [2024-10-04 06:41:02.406923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:09.923 [2024-10-04 06:41:02.406934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.416855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.416886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.416897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.426369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.426400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.426412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.436183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.436213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.445296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.445325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.445337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.456958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.457000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.457012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.466609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.466651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.466662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.477019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.477061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.477073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.485402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.485432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.495344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.495386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.495397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.505641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.505670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.505681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.519250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.519283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.519295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.529841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.529875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.529887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.538889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.538919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.538930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.551940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.551981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.551992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.563774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.563804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.563825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.576190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.576220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.576232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.588936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.588965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.588977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.923 [2024-10-04 06:41:02.600862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:09.923 [2024-10-04 06:41:02.600892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.923 [2024-10-04 06:41:02.600903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.612893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.612934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.612945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.622211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.622247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.622262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.631732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 
[2024-10-04 06:41:02.631776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.631788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.640879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.640908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.640920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.650903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.650932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.650943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.662468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.662498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.662513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.674246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.674276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.674287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.686738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.686768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.686780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.699450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.699480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.699495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.708207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.708236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.708247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.720613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.720644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.720655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.731454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.731484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.731497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.744362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.744392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.744404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.755338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.755369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.755380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.766988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.767026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.767037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.777067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.777097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.788042] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.788072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.788088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.797300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.797330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.797341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.806889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.806918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.806930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.816219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.816249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.816261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.825706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.825748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.825759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.834769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.834800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.834811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.184 [2024-10-04 06:41:02.844612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.844655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.844666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:10.184 [2024-10-04 06:41:02.855028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.184 [2024-10-04 06:41:02.855067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.184 [2024-10-04 06:41:02.855079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.866260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.866289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.866301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.874591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.874621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.874632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.886273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.886303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.886314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.899225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.899256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.899268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.911922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.911952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.911963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.923344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.923375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.923386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.444 [2024-10-04 06:41:02.934392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.444 [2024-10-04 06:41:02.934424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.444 [2024-10-04 06:41:02.934435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:02.944255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:02.944299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:02.944310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:02.956138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:02.956169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:02.956181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:02.968770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:02.968801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:02.968813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:02.981343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:02.981374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:02.981385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:02.992276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:02.992305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:02.992317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.003095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.003125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.003136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.016413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.016445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.016457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.025815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.025891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.025903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.036452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.036483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.036495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.049015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.049046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.060731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.060762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.060773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.072557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.072599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.072611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.085541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.085583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.085594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.097806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.097846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.097858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.107110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.107140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.107152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.445 [2024-10-04 06:41:03.117068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.445 [2024-10-04 06:41:03.117098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.445 [2024-10-04 06:41:03.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.704 [2024-10-04 06:41:03.126015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.704 [2024-10-04 06:41:03.126046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.704 [2024-10-04 06:41:03.126062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.704 [2024-10-04 06:41:03.137781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.704 [2024-10-04 06:41:03.137813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.704 [2024-10-04 06:41:03.137837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.146703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.146745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.146756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.156701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.156742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:10.705 [2024-10-04 06:41:03.156754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.165343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.165373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.165385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.175471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.175501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.175512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.185712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.185754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.185766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.196309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.196340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.196351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.206630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.206660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.206671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.218845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.218884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.218896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.226991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.227027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17124 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.227038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.240071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.240102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.240114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.252349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.252382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.252394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.264762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.264793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.264805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 [2024-10-04 06:41:03.276770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfd7f0) 00:23:10.705 [2024-10-04 06:41:03.276800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.705 [2024-10-04 06:41:03.276812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.705 00:23:10.705 Latency(us) 00:23:10.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.705 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:10.705 nvme0n1 : 2.00 23468.16 91.67 0.00 0.00 5449.08 2353.34 17396.83 00:23:10.705 =================================================================================================================== 00:23:10.705 Total : 23468.16 91.67 0.00 0.00 5449.08 2353.34 17396.83 00:23:10.705 0 00:23:10.705 06:41:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:10.705 06:41:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:10.705 | .driver_specific 00:23:10.705 | .nvme_error 00:23:10.705 | .status_code 00:23:10.705 | .command_transient_transport_error' 00:23:10.705 06:41:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:10.705 06:41:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:10.964 06:41:03 -- host/digest.sh@71 -- # (( 184 > 0 )) 00:23:10.964 06:41:03 -- host/digest.sh@73 -- # killprocess 97246 00:23:10.964 06:41:03 -- common/autotest_common.sh@926 -- # '[' -z 97246 ']' 00:23:10.964 06:41:03 -- 
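The check just traced is the pass/fail criterion for this run: bdevperf's RPC server is asked for per-bdev I/O statistics, and the NVMe transient-transport-error counter is extracted from the JSON. A condensed sketch of the helper, reconstructed from the xtrace above rather than quoted from host/digest.sh (paths, socket, and the jq path are as traced; treat the rest as illustrative):

  # Sketch of get_transient_errcount, reconstructed from the trace above.
  get_transient_errcount() {
      local bdev=$1
      # Ask bdevperf's RPC server for per-bdev I/O statistics, then pull the
      # transient-transport-error counter out of the returned JSON.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  # The run passes only if the injected digest corruptions actually surfaced:
  (( $(get_transient_errcount nvme0n1) > 0 ))   # evaluated as (( 184 > 0 )) here

For scale: 184 transient errors against 23468.16 IOPS over 2.00 s (about 46,936 reads) is roughly one digest error per 255 reads, and because the bdev layer retries them indefinitely (--bdev-retry-count -1, as in the setup traced further down), Fail/s still reports 0.00.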
00:23:10.964 06:41:03 -- host/digest.sh@73 -- # killprocess 97246
00:23:10.964 06:41:03 -- common/autotest_common.sh@926 -- # '[' -z 97246 ']'
00:23:10.964 06:41:03 -- common/autotest_common.sh@930 -- # kill -0 97246
00:23:10.964 06:41:03 -- common/autotest_common.sh@931 -- # uname
00:23:10.964 06:41:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:10.964 06:41:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97246
00:23:10.964 killing process with pid 97246
Received shutdown signal, test time was about 2.000000 seconds
00:23:10.964
00:23:10.964                                                              Latency(us)
00:23:10.964 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:10.964 ===================================================================================================================
00:23:10.964 Total              :                   0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:10.964 06:41:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:10.964 06:41:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:10.964 06:41:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97246'
00:23:10.964 06:41:03 -- common/autotest_common.sh@945 -- # kill 97246
00:23:10.964 06:41:03 -- common/autotest_common.sh@950 -- # wait 97246
00:23:11.223 06:41:03 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:23:11.223 06:41:03 -- host/digest.sh@54 -- # local rw bs qd
00:23:11.223 06:41:03 -- host/digest.sh@56 -- # rw=randread
00:23:11.223 06:41:03 -- host/digest.sh@56 -- # bs=131072
00:23:11.223 06:41:03 -- host/digest.sh@56 -- # qd=16
00:23:11.223 06:41:03 -- host/digest.sh@58 -- # bperfpid=97332
00:23:11.223 06:41:03 -- host/digest.sh@60 -- # waitforlisten 97332 /var/tmp/bperf.sock
00:23:11.223 06:41:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:11.223 06:41:03 -- common/autotest_common.sh@819 -- # '[' -z 97332 ']'
00:23:11.223 06:41:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:11.223 06:41:03 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:11.223 06:41:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:11.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:11.223 06:41:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:11.223 06:41:03 -- common/autotest_common.sh@10 -- # set +x
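Condensing the launch that was just traced: run_bperf_err starts bdevperf as a bare RPC server and waits for its socket before configuring anything. A sketch under the assumption that waitforlisten is the autotest_common.sh helper that polls until the given pid is listening on the socket (the bdevperf flags are exactly as traced):

  # Start bdevperf on core mask 0x2 as an RPC server on /var/tmp/bperf.sock;
  # -z makes it wait for a perform_tests RPC instead of running immediately.
  # The job it will eventually run: 2 s of 128 KiB random reads at queue depth 16.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  waitforlisten "$bperfpid" /var/tmp/bperf.sock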
00:23:11.482 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:11.482 Zero copy mechanism will not be used.
00:23:11.482 [2024-10-04 06:41:03.912159] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:23:11.482 [2024-10-04 06:41:03.912272] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97332 ]
00:23:11.482 [2024-10-04 06:41:04.050416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:11.482 [2024-10-04 06:41:04.116889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:12.418 06:41:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:12.418 06:41:04 -- common/autotest_common.sh@852 -- # return 0
00:23:12.418 06:41:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:12.418 06:41:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:12.677 06:41:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:12.677 06:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:12.677 06:41:05 -- common/autotest_common.sh@10 -- # set +x
00:23:12.677 06:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:12.677 06:41:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:12.677 06:41:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:12.937 nvme0n1
00:23:12.937 06:41:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:12.937 06:41:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:12.937 06:41:05 -- common/autotest_common.sh@10 -- # set +x
00:23:12.937 06:41:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:12.937 06:41:05 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:12.937 06:41:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
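The RPC sequence just traced is the whole error-injection setup, and it explains the flood of digest errors that follows. Two RPC sockets are in play: the bperf_rpc lines go to the bdevperf initiator on /var/tmp/bperf.sock, while the bare rpc_cmd lines go to the nvmf target app on its default socket (inferred from the missing -s argument), so the crc32c corruption lands where outgoing data digests are produced and the initiator's --ddgst verification of received PDUs is what flags it. A condensed sketch (commands exactly as traced; reading -i 32 as "inject on every 32nd operation" is an assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Initiator: keep per-status-code NVMe error counters and retry failed I/O forever.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target: make sure nothing is corrupted while the controller attaches.
  $rpc accel_error_inject_error -o crc32c -t disable

  # Initiator: attach over TCP with data digest enabled (--ddgst).
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target: now corrupt crc32c results (assumed interval: every 32nd via -i 32),
  # so PDUs carry bad data digests from here on.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Kick off the preconfigured 128 KiB randread job in bdevperf.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests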
00:23:12.937 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:12.937 Zero copy mechanism will not be used.
00:23:12.937 Running I/O for 2 seconds...
00:23:12.937 [2024-10-04 06:41:05.581751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:12.937 [2024-10-04 06:41:05.581807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:12.937 [2024-10-04 06:41:05.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line record repeats from 06:41:05.584 through 06:41:05.646 for the 128 KiB run: data digest errors on tqpair=(0x8944a0), each a 32-block READ on qid:1 completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps, cid, lba, and sqhd vary ...]
00:23:13.199 [2024-10-04 06:41:05.648492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:13.199 [2024-10-04 06:41:05.648526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.199 [2024-10-04 06:41:05.648538] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.652009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.652045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.652057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.654766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.654800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.654811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.657928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.657964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.657987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.660881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.660915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.660926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.663596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.663771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.663786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.667412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.667591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.667607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.670248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.670282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 
06:41:05.670294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.673463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.673497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.673509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.676523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.676557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.676568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.679958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.199 [2024-10-04 06:41:05.679994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.199 [2024-10-04 06:41:05.680006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.199 [2024-10-04 06:41:05.682993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.683036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.683060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.686119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.686154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.686166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.688888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.689058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.689073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.692192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.692227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.692250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.695518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.695554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.695566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.698683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.698716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.698727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.702070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.702106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.702119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.705362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.705396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.705408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.708776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.708811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.708863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.711160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.711195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.711207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.714466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.714500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.714512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.717899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.717930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.717941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.721149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.721310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.721338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.724206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.724242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.724254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.727659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.727694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.727706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.730755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.730790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.730828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.733724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.733926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.733944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.736900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.736942] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.736957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.740449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.740484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.740496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.743414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.743449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.743462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.746704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.746883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.746900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.750269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.750300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.753011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.753046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.753057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.756475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.756509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.756533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.760388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.760422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.760435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.763185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.763220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.763232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.766454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.766624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.200 [2024-10-04 06:41:05.766640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.200 [2024-10-04 06:41:05.770602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.200 [2024-10-04 06:41:05.770761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.770777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.774253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.774288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.774301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.777719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.777753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.777764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.781285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.781320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.781332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.784453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 
[2024-10-04 06:41:05.784487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.784499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.788110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.788293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.788408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.791519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.791555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.791567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.794793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.794992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.795111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.797331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.797493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.797632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.801022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.801193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.801348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.805101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.805136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.805148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.808406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.808591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.808608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.811619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.811656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.811668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.815209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.815243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.815255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.818247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.818281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.818293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.821413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.821447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.821459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.824570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.824746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.824762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.827580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.827615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.827637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.830211] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.830245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.830257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.833773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.833808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.833840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.836759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.836795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.836831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.840278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.840313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.840325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.843606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.843641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.843654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.847118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.847153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.847165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.849973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.850020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:13.201 [2024-10-04 06:41:05.853027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.853062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.853074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.856044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.856077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.856088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.201 [2024-10-04 06:41:05.859198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.201 [2024-10-04 06:41:05.859233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.201 [2024-10-04 06:41:05.859246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.202 [2024-10-04 06:41:05.862461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.202 [2024-10-04 06:41:05.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.202 [2024-10-04 06:41:05.862505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.202 [2024-10-04 06:41:05.865740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.202 [2024-10-04 06:41:05.865774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.202 [2024-10-04 06:41:05.865785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.202 [2024-10-04 06:41:05.868724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.202 [2024-10-04 06:41:05.868919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.202 [2024-10-04 06:41:05.868935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.202 [2024-10-04 06:41:05.872165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.202 [2024-10-04 06:41:05.872200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.202 [2024-10-04 06:41:05.872212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.202 [2024-10-04 06:41:05.875338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.202 [2024-10-04 06:41:05.875373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.202 [2024-10-04 06:41:05.875385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.878456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.878489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.882086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.882121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.882133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.885390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.885433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.885448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.888012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.888047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.888059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.890941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.890974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.890998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.894216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.894261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.894285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.897363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.897397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.897409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.900427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.900462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.900474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.904510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.904696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.908511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.908670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.908686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.911997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.912156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.912173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.914810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.463 [2024-10-04 06:41:05.914863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.463 [2024-10-04 06:41:05.914875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.463 [2024-10-04 06:41:05.918007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.918042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.918066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.921422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.921456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.921468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.924987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.925038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.925051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.928174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.928209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.928221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.931890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.931923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.931934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.934843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.934886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.934905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.938026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.938061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.938082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.940757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.940943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 
[2024-10-04 06:41:05.940959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.943956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.943987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.943998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.947155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.947191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.947203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.950479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.950514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.950537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.953970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.954005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.954018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.957047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.957082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.957094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.960280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.960325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.960337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.963377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.963411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.963431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.966551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.966584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.966603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.969607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.969780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.969795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.973175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.973210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.973222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.976307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.976341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.976353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.979549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.979582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.979594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.982546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.982579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.982591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.986154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.986190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.986213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.989605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.989639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.989663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.992044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.992206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.992221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.995825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.996028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.996144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:05.999061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:05.999246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:05.999480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:06.002545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:06.002730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.464 [2024-10-04 06:41:06.002860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.464 [2024-10-04 06:41:06.006647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.464 [2024-10-04 06:41:06.006835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.006962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.010450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.010619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.010732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.014219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.014376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.014488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.017775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.017835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.021264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.021300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.021312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.024046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.024082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.024094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.027070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.027230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.027246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.030587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.030622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.030634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.033616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 
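The repeated `nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error` records above are the host-side digest check failing on received data PDUs: NVMe/TCP's optional data digest (DDGST) is a CRC32C over the PDU's DATA field, and a mismatch is surfaced as a transport-level error instead of silently handing corrupted read data up the stack. A minimal, self-contained sketch of that check follows (bitwise software CRC32C with the Castagnoli polynomial; SPDK itself computes this via accelerated helpers, so the function names here are illustrative, not SPDK's API):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Software CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
 * algorithm NVMe/TCP specifies for the optional header/data digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Illustrative receive-side check: recompute the digest over the DATA
 * field and compare it with the DDGST trailing the PDU. A mismatch is
 * what the log above reports as "data digest error". */
static bool ddgst_ok(const uint8_t *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst;
}
```

In production this loop is replaced by hardware-assisted CRC32C, but the comparison and the failure path it feeds are the same ones exercised by this test.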
00:23:13.465 [2024-10-04 06:41:06.033651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.033663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.036868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.036907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.040163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.040197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.040209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.043150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.043185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.043196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.046049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.046083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.046095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.049383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.049420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.049431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.052661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.052695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.052707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.056383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.056418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.059290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.059341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.059353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.062703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.062883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.062905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.066059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.066094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.066106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.069592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.069627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.069650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.072406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.072441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.072464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.075733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.075919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.075938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.079373] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.079547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.079563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.082430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.082464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.082476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.085974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.086164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.086296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.089364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.089558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.089700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.092945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.093109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.093126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.465 [2024-10-04 06:41:06.096001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.465 [2024-10-04 06:41:06.096037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.465 [2024-10-04 06:41:06.096057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.099416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.099458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.099479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:13.466 [2024-10-04 06:41:06.102752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.102963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.102982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.106310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.106347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.106359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.109503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.109538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.109550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.112867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.112899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.112911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.115173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.115346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.115362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.118436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.118478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.118490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.122413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.122602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.122699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.126539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.126723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.126909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.130202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.130379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.130502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.133934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.134105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.134273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.466 [2024-10-04 06:41:06.137693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.466 [2024-10-04 06:41:06.137871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.466 [2024-10-04 06:41:06.137887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.141442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.141478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.141490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.144306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.144341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.144353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.147691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.147727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.147740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.151077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.151114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.154195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.154230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.154242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.157948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.157982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.157994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.161713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.161747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.161759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.164167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.164202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.164214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.167309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.167344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.167356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.169964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.169996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.170015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.173205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.173249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.176186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.176221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.176233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.179442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.179477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.179489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.182715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.182897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.182913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.185706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.185743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.185755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.188793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.188843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.188856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.192171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.192204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.192216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.195182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.195216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.727 [2024-10-04 06:41:06.195227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.727 [2024-10-04 06:41:06.198101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.727 [2024-10-04 06:41:06.198272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.198288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.201842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.202023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.202038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.205721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.205928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.205944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.208693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.208723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.208735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.212167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.212201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.212212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.215260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.215295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.215308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.218549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.218738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.218754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.222047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.222083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.222094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.225006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.225040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.228525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.228558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.228570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.231647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.231680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.231692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.234949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.234983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.234995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.238304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.238474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.238490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.241873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.241920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.241932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.245137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.245173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.245185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.248770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.248806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.248838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.252078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.252113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.255034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.255077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.255089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.258278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.258312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.258331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.261726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 
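Each digest failure is completed back to the submitter with status "(00/22)": Status Code Type 0x0 (generic command status) and Status Code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, with `p:0 m:0 dnr:0` decoded from the same 16-bit status word. A sketch of that decoding, matching the print format used throughout this log (bit layout per the NVMe base specification; the helper name is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion status the way the log prints it:
 * "(SCT/SC) ... p:<phase> m:<more> dnr:<do-not-retry>".
 * The 16-bit word is CQE dword 3 bits 31:16, phase tag in bit 0:
 *   bit 0      P    - phase tag
 *   bits 8:1   SC   - status code
 *   bits 11:9  SCT  - status code type
 *   bits 13:12 CRD  - command retry delay
 *   bit 14     M    - more
 *   bit 15     DNR  - do not retry */
static void print_status(uint16_t sf)
{
    uint8_t p   = sf & 0x1;
    uint8_t sc  = (sf >> 1) & 0xFF;
    uint8_t sct = (sf >> 9) & 0x7;
    uint8_t m   = (sf >> 14) & 0x1;
    uint8_t dnr = (sf >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 = COMMAND TRANSIENT TRANSPORT ERROR, as above. */
    uint16_t sf = (uint16_t)((0x0 << 9) | (0x22 << 1));
    print_status(sf);   /* prints "(00/22) p:0 m:0 dnr:0" */
    return 0;
}
```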
00:23:13.728 [2024-10-04 06:41:06.261906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.261928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.264909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.264938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.264950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.268002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.268037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.268048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.271258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.271293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.271305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.274754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.274794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.274823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.278279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.278313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.278324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.281294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.281465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.281488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.284808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.284857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.284869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.728 [2024-10-04 06:41:06.288269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.728 [2024-10-04 06:41:06.288302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.728 [2024-10-04 06:41:06.288314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.291451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.291486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.291505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.294242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.294277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.294288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.297858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.297893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.297904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.300617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.300652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.300673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.304469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.304635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.304651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.308126] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.308311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.308430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.311853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.312028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.312167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.315110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.315271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.315288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.318717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.318769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.318781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.322252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.322288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.322311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.325304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.325349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.325362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.729 [2024-10-04 06:41:06.328544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:13.729 [2024-10-04 06:41:06.328580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.729 [2024-10-04 06:41:06.328591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
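Because every completion in this burst carries SCT/SC 00/22 with `dnr:0`, the controller is explicitly permitting retries: the digest failure is classified as transient rather than fatal, which is why the test can drive the same qpair (0x8944a0) through hundreds of failed READs without tearing the connection down. A hedged sketch of that host-side retry decision (constant and function names are illustrative, not SPDK's):

```c
#include <stdbool.h>
#include <stdint.h>

#define NVME_SCT_GENERIC                0x0
#define NVME_SC_TRANSIENT_TRANSPORT_ERR 0x22

/* Illustrative retry policy: a failed command may be resubmitted when
 * the controller did not set DNR and the status marks the failure as
 * transient, e.g. the (00/22) digest-error completions in this log. */
static bool completion_is_retryable(uint8_t sct, uint8_t sc, bool dnr)
{
    if (dnr) {
        return false;   /* controller forbids retrying this command */
    }
    return sct == NVME_SCT_GENERIC && sc == NVME_SC_TRANSIENT_TRANSPORT_ERR;
}
```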
00:23:13.729 [2024-10-04 06:41:06.332100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:13.729 [2024-10-04 06:41:06.332134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.729 [2024-10-04 06:41:06.332146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:13.729 [2024-10-04 06:41:06.335513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:13.729 [2024-10-04 06:41:06.335548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.729 [2024-10-04 06:41:06.335570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:13.729 [2024-10-04 06:41:06.338668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:13.729 [2024-10-04 06:41:06.338704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.729 [2024-10-04 06:41:06.338716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:13.729 [2024-10-04 06:41:06.341794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:13.729 [2024-10-04 06:41:06.341864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.729 [2024-10-04 06:41:06.341877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... many further identical-pattern records omitted: data digest errors on tqpair=(0x8944a0), each READ (qid:1, len:32) completing with TRANSIENT TRANSPORT ERROR (00/22), timestamps 06:41:06.345718 through 06:41:06.797863, Jenkins prefixes 00:23:13.729 through 00:23:14.256 ...]
00:23:14.256 [2024-10-04 06:41:06.801239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:14.256 [2024-10-04 06:41:06.801269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.256 [2024-10-04 06:41:06.801279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:14.256 [2024-10-04 06:41:06.804052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x8944a0) 00:23:14.256 [2024-10-04 06:41:06.804081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.256 [2024-10-04 06:41:06.804092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.256 [2024-10-04 06:41:06.806695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.256 [2024-10-04 06:41:06.806735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.256 [2024-10-04 06:41:06.806746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.256 [2024-10-04 06:41:06.809286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.809316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.809331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.812405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.812434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.812445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.815929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.815960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.815971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.818629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.818658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.818669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.822369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.822399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.822410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.825517] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.825547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.825558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.828793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.828845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.832506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.832536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.832547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.835606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.835636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.835647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.838091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.838119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.838130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.841041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.841070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.841081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.844742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.844784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.844795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:14.257 [2024-10-04 06:41:06.848049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.848079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.848089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.850921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.850951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.850962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.854216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.854246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.854256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.857286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.857316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.857327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.860972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.861003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.861014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.863751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.863780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.863792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.867058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.867089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.867100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.869956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.869984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.870002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.872984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.873014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.873025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.876352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.876382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.876392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.879186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.879216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.879226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.882367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.882407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.882417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.885402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.885431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.885448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.888497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.888527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.888538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.891994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.892021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.257 [2024-10-04 06:41:06.892033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.257 [2024-10-04 06:41:06.895525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.257 [2024-10-04 06:41:06.895554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.895565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.897918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.897946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.897957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.900979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.901010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.901021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.903925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.903953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.903963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.907735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.907775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.907786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.911094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.911123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.911134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.914266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.914295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.914306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.917515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.917545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.917555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.920329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.920369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.920380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.923459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.923490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.923501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.926467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.926495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.926506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.258 [2024-10-04 06:41:06.929332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.258 [2024-10-04 06:41:06.929363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.258 [2024-10-04 06:41:06.929374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.519 [2024-10-04 06:41:06.932783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.519 [2024-10-04 06:41:06.932812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.519 
[2024-10-04 06:41:06.932837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.519 [2024-10-04 06:41:06.935720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.519 [2024-10-04 06:41:06.935749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.519 [2024-10-04 06:41:06.935760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.519 [2024-10-04 06:41:06.938550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.519 [2024-10-04 06:41:06.938579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.519 [2024-10-04 06:41:06.938590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.519 [2024-10-04 06:41:06.942042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.519 [2024-10-04 06:41:06.942071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.519 [2024-10-04 06:41:06.942086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.519 [2024-10-04 06:41:06.945019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.519 [2024-10-04 06:41:06.945050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.945061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.947795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.947837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.947848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.951149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.951179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.951190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.954150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.954190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.954201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.957599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.957630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.957641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.960685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.960725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.960735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.963884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.963913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.963924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.966637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.966666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.966678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.970148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.970190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.973005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.973035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.975844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.975872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.975883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.978930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.978957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.978967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.981976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.982006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.982017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.985192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.985222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.985233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.987873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.987914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.987924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.990827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.990865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.990876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.994265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.994306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.994317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:06.996919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:06.996948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:06.996959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.000010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.000040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.000051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.003545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.003575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.003586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.006602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.006630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.006640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.010159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.010189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.010201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.013494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.013523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.013534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.016630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.016671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.016682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.019455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 
[2024-10-04 06:41:07.019485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.019495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.022772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.022800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.022811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.025779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.025807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.025828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.520 [2024-10-04 06:41:07.028699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.520 [2024-10-04 06:41:07.028728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.520 [2024-10-04 06:41:07.028739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.031960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.032000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.032011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.034996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.035044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.035057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.037893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.037919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.041419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.041459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.041470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.044606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.044637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.044648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.047489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.047518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.047532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.050637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.050667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.050677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.053376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.053406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.053416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.056230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.056259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.056274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.059471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.059510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.059521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.062689] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.062718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.062729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.065411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.065441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.068790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.068841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.068853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.071729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.071770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.074989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.075025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.075036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.077910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.077951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.077963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.521 [2024-10-04 06:41:07.080998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.521 [2024-10-04 06:41:07.081028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.521 [2024-10-04 06:41:07.081039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
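Every *ERROR* line above is the TCP receive path rejecting a read payload whose trailing data digest does not match the CRC it recomputed over the received bytes; the affected command is then completed with the TRANSIENT TRANSPORT ERROR (00/22) printed beside it. NVMe/TCP's data digest is a CRC-32C over the data PDU payload. As a minimal illustrative sketch of the check that is failing here (plain Python, not SPDK code; the helper names are invented for the example):

    # CRC-32C (Castagnoli), reflected polynomial 0x82F63B78: the digest
    # NVMe/TCP appends to data PDUs when data digests are negotiated.
    def crc32c(data: bytes, crc: int = 0) -> int:
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value

    # A receiver recomputes the digest and compares it with the one that
    # arrived after the payload; any mismatch is a "data digest error".
    payload = bytes(16 * 1024)               # stand-in for a data PDU payload
    good_digest = crc32c(payload)
    corrupted = b"\x01" + payload[1:]        # one byte flipped in transit
    assert crc32c(corrupted) != good_digest  # command fails with (00/22)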
00:23:14.521 [2024-10-04 06:41:07.084540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:14.521 [2024-10-04 06:41:07.084570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.521 [2024-10-04 06:41:07.084581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the sequence continues unchanged from 06:41:07.088067 through 06:41:07.170114 ...]
00:23:14.522 [2024-10-04 06:41:07.173096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:14.522 [2024-10-04 06:41:07.173131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.522
[2024-10-04 06:41:07.173142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.176397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.176448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.176459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.179542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.179591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.179603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.182709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.182757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.182768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.185709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.185758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.185779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.189287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.189338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.189350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.192031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.192082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.192093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.522 [2024-10-04 06:41:07.195031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.522 [2024-10-04 06:41:07.195065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:14.522 [2024-10-04 06:41:07.195076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.198305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.198354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.198374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.201532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.201595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.205161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.205212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.205224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.208361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.208411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.208423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.211799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.211859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.211871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.215274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.215310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.215330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.218458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.218507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.218527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.221904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.221952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.225375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.225424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.225435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.228136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.228187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.230664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.230712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.230723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.234190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.234256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.234268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.237331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.237382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.237394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.241028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.241080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.241101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.244640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.244692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.244714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.248113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.248149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.784 [2024-10-04 06:41:07.250632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.784 [2024-10-04 06:41:07.250682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.784 [2024-10-04 06:41:07.250694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.254197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.254247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.254267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.257754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.257805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.257828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.260904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.260952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.260964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.264072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 
[2024-10-04 06:41:07.264109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.264120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.267485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.267535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.267546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.270644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.270693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.270717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.273958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.274008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.274020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.277580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.277630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.277642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.280873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.280921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.280932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.284150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.284185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.284196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.287499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.287548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.287560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.290733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.290780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.290791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.294159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.294210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.294222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.297172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.297208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.297219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.300468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.300518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.300529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.304089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.304139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.304151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.307438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.307496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.307507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.310713] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.310762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.310773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.313947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.313997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.314009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.317433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.317484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.317495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.321126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.321176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.321196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.324337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.324388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.324399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.327791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.327852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.331312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.331371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.331383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:14.785 [2024-10-04 06:41:07.334245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.334295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.334316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.337469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.337520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.337540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.340178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.340212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.785 [2024-10-04 06:41:07.340223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.785 [2024-10-04 06:41:07.343964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.785 [2024-10-04 06:41:07.344014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.344027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.346956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.347003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.347035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.350882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.350931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.350953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.353428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.353478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.353489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.357072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.357123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.357135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.360228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.360294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.360305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.363927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.363978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.363989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.367506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.367557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.367568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.370932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.370966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.370978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.374175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.374227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.374238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.377539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.377588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.377599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.380164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.380198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.380219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.383533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.383584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.383596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.387106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.387142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.387153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.390079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.390113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.390125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.393423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.393474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.397319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.397372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.397384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.400819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.400878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.400890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.404647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.404718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.407977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.408014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.408026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.411813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.411871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.411884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.415237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.415276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.415288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.419000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.419062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.419075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.422573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.422623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.422634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.426253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.426305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 
[2024-10-04 06:41:07.426316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.429664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.429714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.429734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.433197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.433265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.433292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.786 [2024-10-04 06:41:07.436416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.786 [2024-10-04 06:41:07.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.786 [2024-10-04 06:41:07.436480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.439668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.439721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.439732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.443374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.443427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.443439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.446898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.446946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.446957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.450102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.450137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.450148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.453311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.453360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.453372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.456503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.456552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.456563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.787 [2024-10-04 06:41:07.460053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:14.787 [2024-10-04 06:41:07.460104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.787 [2024-10-04 06:41:07.460115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.462914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.462955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.462967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.466099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.466133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.466145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.469241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.469290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.469309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.472339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.472389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.472401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.475753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.475804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.475825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.479296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.479344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.482503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.482552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.482573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.485816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.485874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.485885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.489324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.489373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.489384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.492544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.492594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.492605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.495786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.495844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.495857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.499044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.499084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.499095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.501658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.501708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.501728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.504983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.505033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.505053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.508190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.508241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.508261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.511592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.511642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.511654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.514928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.514960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.514972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.518437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 
[2024-10-04 06:41:07.518488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.518499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.521532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.521581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.521601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.524598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.524648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.524659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.527314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.527381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.527392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.530353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.530403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.530422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.533688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.533737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.533757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.537437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.537492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.047 [2024-10-04 06:41:07.537503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.047 [2024-10-04 06:41:07.540735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8944a0) 00:23:15.047 [2024-10-04 06:41:07.540783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.540805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.544503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.544552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.544573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.547802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.547859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.547871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.551119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.551152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.551163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.554574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.554623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.554634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.557849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.557910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.560899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0) 00:23:15.048 [2024-10-04 06:41:07.560927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.048 [2024-10-04 06:41:07.560938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.048 [2024-10-04 06:41:07.564138] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:15.048 [2024-10-04 06:41:07.564188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:15.048 [2024-10-04 06:41:07.564207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:15.048 [2024-10-04 06:41:07.567671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8944a0)
00:23:15.048 [2024-10-04 06:41:07.567721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:15.048 [2024-10-04 06:41:07.567732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:15.048
00:23:15.048 Latency(us)
00:23:15.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.048 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:15.048 nvme0n1 : 2.00 9459.47 1182.43 0.00 0.00 1688.54 595.78 11677.32
00:23:15.048 ===================================================================================================================
00:23:15.048 Total : 9459.47 1182.43 0.00 0.00 1688.54 595.78 11677.32
00:23:15.048 0
00:23:15.048 06:41:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:15.048 06:41:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:15.048 06:41:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:15.048 06:41:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:15.048 | .driver_specific
00:23:15.048 | .nvme_error
00:23:15.048 | .status_code
00:23:15.048 | .command_transient_transport_error'
00:23:15.307 06:41:07 -- host/digest.sh@71 -- # (( 610 > 0 ))
00:23:15.307 06:41:07 -- host/digest.sh@73 -- # killprocess 97332
00:23:15.307 06:41:07 -- common/autotest_common.sh@926 -- # '[' -z 97332 ']'
00:23:15.307 06:41:07 -- common/autotest_common.sh@930 -- # kill -0 97332
00:23:15.307 06:41:07 -- common/autotest_common.sh@931 -- # uname
00:23:15.307 06:41:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:15.307 06:41:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97332
00:23:15.307 06:41:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:15.307 06:41:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:15.307 killing process with pid 97332
00:23:15.307 06:41:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97332'
00:23:15.307 Received shutdown signal, test time was about 2.000000 seconds
00:23:15.307
00:23:15.307 Latency(us)
00:23:15.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.307 ===================================================================================================================
00:23:15.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:15.307 06:41:07 -- common/autotest_common.sh@945 -- # kill 97332
00:23:15.307 06:41:07 -- common/autotest_common.sh@950 -- # wait 97332
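For reference, the pass/fail check traced above reduces to a single jq query: with --nvme-error-stat enabled, the bdev layer keeps per-status-code NVMe error counters, and this leg passes only if the transient-transport-error counter is non-zero (610 here). A minimal sketch of the same check, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock and that $rootdir points at an SPDK checkout (the helper is written out here for illustration; only the RPC name and jq path are taken from the trace):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat reports per-bdev counters; the jq path below is
        # exactly the one used in the trace above.
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))  # non-zero means the injected digest errors were seen and counted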
00:23:15.567 06:41:08 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:23:15.567 06:41:08 -- host/digest.sh@54 -- # local rw bs qd
00:23:15.567 06:41:08 -- host/digest.sh@56 -- # rw=randwrite
00:23:15.567 06:41:08 -- host/digest.sh@56 -- # bs=4096
00:23:15.567 06:41:08 -- host/digest.sh@56 -- # qd=128
00:23:15.567 06:41:08 -- host/digest.sh@58 -- # bperfpid=97422
00:23:15.567 06:41:08 -- host/digest.sh@60 -- # waitforlisten 97422 /var/tmp/bperf.sock
00:23:15.567 06:41:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:15.567 06:41:08 -- common/autotest_common.sh@819 -- # '[' -z 97422 ']'
00:23:15.567 06:41:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:15.567 06:41:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:15.567 06:41:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:15.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:15.567 06:41:08 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:15.567 06:41:08 -- common/autotest_common.sh@10 -- # set +x
00:23:15.567 [2024-10-04 06:41:08.222448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:23:15.567 [2024-10-04 06:41:08.222556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97422 ]
00:23:15.826 [2024-10-04 06:41:08.356864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.826 [2024-10-04 06:41:08.437033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:16.774 06:41:09 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:16.774 06:41:09 -- common/autotest_common.sh@852 -- # return 0
00:23:16.774 06:41:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:16.774 06:41:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:17.065 06:41:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:17.065 06:41:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:17.065 06:41:09 -- common/autotest_common.sh@10 -- # set +x
00:23:17.065 06:41:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:17.065 06:41:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:17.065 06:41:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:17.339 nvme0n1
00:23:17.339 06:41:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:17.339 06:41:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:17.339 06:41:09 -- common/autotest_common.sh@10 -- # set +x
00:23:17.339 06:41:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:17.339 06:41:09 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:17.339 06:41:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:17.339 Running I/O for 2 seconds...
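The randwrite leg above repeats the same recipe: start bdevperf with -z (stay idle until perform_tests arrives over the RPC socket), keep crc32c error injection disabled while the controller attaches so the connect itself is clean, attach with --ddgst to turn on the NVMe/TCP data digest (a CRC32C over each data PDU), then re-arm the injector and run the timed workload. A condensed sketch under the same assumptions ($rootdir and the /var/tmp/bperf.sock socket; addresses, NQN, and flag values are copied verbatim from the trace, and the rpc wrapper is illustrative):

    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # Count NVMe errors per status code and retry transient failures
    # indefinitely (-1) instead of failing the I/O outright.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c intact while connecting.
    rpc accel_error_inject_error -o crc32c -t disable
    # --ddgst enables the data digest on the TCP transport.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm the injector to corrupt crc32c results (-o/-t/-i values as in the
    # trace), so a slice of the digests miscompare as transient errors.
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the timed run; each miscompare surfaces below as a data digest
    # error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests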
00:23:17.339 [2024-10-04 06:41:09.935325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eea00 00:23:17.339 [2024-10-04 06:41:09.936383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.936444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.945343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ea680 00:23:17.339 [2024-10-04 06:41:09.946013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.946077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.954991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190de8a8 00:23:17.339 [2024-10-04 06:41:09.955412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.955450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.964618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ea248 00:23:17.339 [2024-10-04 06:41:09.965016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.965050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.974315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f0bc0 00:23:17.339 [2024-10-04 06:41:09.974660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.974695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.983910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e5220 00:23:17.339 [2024-10-04 06:41:09.984226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.984264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:09.993383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e84c0 00:23:17.339 [2024-10-04 06:41:09.993641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:09.993696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 
m:0 dnr:0 00:23:17.339 [2024-10-04 06:41:10.003209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e9168 00:23:17.339 [2024-10-04 06:41:10.003478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.339 [2024-10-04 06:41:10.003506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.014164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e84c0 00:23:17.614 [2024-10-04 06:41:10.014387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.014408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.024575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e5220 00:23:17.614 [2024-10-04 06:41:10.024770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.024791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.037993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e49b0 00:23:17.614 [2024-10-04 06:41:10.039146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.039184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.045491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e73e0 00:23:17.614 [2024-10-04 06:41:10.045665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.045684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.057609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ee5c8 00:23:17.614 [2024-10-04 06:41:10.058861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.058918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.067642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e6fa8 00:23:17.614 [2024-10-04 06:41:10.068602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.068652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.077690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ea680 00:23:17.614 [2024-10-04 06:41:10.078880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.078923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.087847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ea248 00:23:17.614 [2024-10-04 06:41:10.089048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.089097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.097799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e3060 00:23:17.614 [2024-10-04 06:41:10.099056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.099091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.108510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190de470 00:23:17.614 [2024-10-04 06:41:10.109083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.109117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.118291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e6300 00:23:17.614 [2024-10-04 06:41:10.118888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.118942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.127596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eea00 00:23:17.614 [2024-10-04 06:41:10.128569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.128617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.136751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f57b0 00:23:17.614 [2024-10-04 06:41:10.137721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.137768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.146331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e3498 00:23:17.614 [2024-10-04 06:41:10.147087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.147121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.157749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e4578 00:23:17.614 [2024-10-04 06:41:10.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.158559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.167634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f96f8 00:23:17.614 [2024-10-04 06:41:10.168413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.168461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.176855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ee5c8 00:23:17.614 [2024-10-04 06:41:10.178331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.178378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.185884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fb480 00:23:17.614 [2024-10-04 06:41:10.186805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.186861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.197351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190df550 00:23:17.614 [2024-10-04 06:41:10.198302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.198347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.206398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fe2e8 00:23:17.614 [2024-10-04 06:41:10.207735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-04 06:41:10.207789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:17.614 [2024-10-04 06:41:10.216189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f3a28 00:23:17.614 [2024-10-04 06:41:10.216865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.216936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.225280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f2510 00:23:17.615 [2024-10-04 06:41:10.226367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.226415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.234771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f46d0 00:23:17.615 [2024-10-04 06:41:10.236011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.236044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.246244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fac10 00:23:17.615 [2024-10-04 06:41:10.247152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.247184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.254733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ef6a8 00:23:17.615 [2024-10-04 06:41:10.255680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.255728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.264579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fc998 00:23:17.615 [2024-10-04 06:41:10.265946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.265978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.273934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e49b0 00:23:17.615 [2024-10-04 06:41:10.274360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.274393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:17.615 [2024-10-04 06:41:10.283247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ed0b0 00:23:17.615 [2024-10-04 06:41:10.284364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.615 [2024-10-04 06:41:10.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:17.874 [2024-10-04 06:41:10.293519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e95a0 00:23:17.874 [2024-10-04 06:41:10.294704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.874 [2024-10-04 06:41:10.294752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:17.874 [2024-10-04 06:41:10.303230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fac10 00:23:17.874 [2024-10-04 06:41:10.303613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.874 [2024-10-04 06:41:10.303648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:17.874 [2024-10-04 06:41:10.312321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f7da8 00:23:17.874 [2024-10-04 06:41:10.312836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.874 [2024-10-04 06:41:10.312876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:17.874 [2024-10-04 06:41:10.322660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ebb98 00:23:17.874 [2024-10-04 06:41:10.324009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.874 [2024-10-04 06:41:10.324041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:17.874 [2024-10-04 06:41:10.332414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ecc78 00:23:17.874 [2024-10-04 06:41:10.332881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.332914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.342314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e8d30 00:23:17.875 [2024-10-04 06:41:10.343394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.343442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.352038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e4de8 00:23:17.875 [2024-10-04 06:41:10.353431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.353480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.362342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e38d0 00:23:17.875 [2024-10-04 06:41:10.364030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.364064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.371681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e99d8 00:23:17.875 [2024-10-04 06:41:10.372497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.372546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.380269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e1f80 00:23:17.875 [2024-10-04 06:41:10.380585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.380612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.391397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f81e0 00:23:17.875 [2024-10-04 06:41:10.392198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.392245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.401246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fe720 00:23:17.875 [2024-10-04 06:41:10.402104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.402150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.410589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fa7d8 00:23:17.875 [2024-10-04 06:41:10.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.411802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.419960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f4298 00:23:17.875 [2024-10-04 06:41:10.421144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.421175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.429572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f92c0 00:23:17.875 [2024-10-04 06:41:10.430707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.430754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.439238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eee38 00:23:17.875 [2024-10-04 06:41:10.440350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.440397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.448766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eea00 00:23:17.875 [2024-10-04 06:41:10.449705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.449752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.460350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f8a50 00:23:17.875 [2024-10-04 06:41:10.461995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.462042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.469730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eea00 00:23:17.875 [2024-10-04 06:41:10.470558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.470623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.478294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ec840 00:23:17.875 [2024-10-04 06:41:10.478643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.478673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.490601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190de038 00:23:17.875 [2024-10-04 06:41:10.491572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.491619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.499249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f46d0 00:23:17.875 [2024-10-04 06:41:10.500229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.500276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.509332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f1868 00:23:17.875 [2024-10-04 06:41:10.509995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.510026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.518874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190de470 00:23:17.875 [2024-10-04 06:41:10.520032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.520065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.529169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e9e10 00:23:17.875 [2024-10-04 06:41:10.529411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.529431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.539220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fa7d8 00:23:17.875 [2024-10-04 06:41:10.539690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.539724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:17.875 [2024-10-04 06:41:10.549379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e8d30 00:23:17.875 [2024-10-04 06:41:10.550252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.875 [2024-10-04 06:41:10.550300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.559254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190ea248 00:23:18.135 [2024-10-04 06:41:10.560440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.560489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.568153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fa3a0 00:23:18.135 [2024-10-04 06:41:10.568248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.568266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.579865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eff18 00:23:18.135 [2024-10-04 06:41:10.580609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.580656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.589784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f5be8 00:23:18.135 [2024-10-04 06:41:10.590584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.590631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.599147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e27f0 00:23:18.135 [2024-10-04 06:41:10.600201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.600247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.609983] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f20d8 00:23:18.135 [2024-10-04 06:41:10.611067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.611099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.618405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fe720 00:23:18.135 [2024-10-04 06:41:10.619600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 
06:41:10.619649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.628448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f8a50 00:23:18.135 [2024-10-04 06:41:10.629060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.629107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.638150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f6020 00:23:18.135 [2024-10-04 06:41:10.638839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.638896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.647404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f3a28 00:23:18.135 [2024-10-04 06:41:10.648891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.648936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.656800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f4b08 00:23:18.135 [2024-10-04 06:41:10.657035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.657054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.668874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e99d8 00:23:18.135 [2024-10-04 06:41:10.670018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.670065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.675740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190fa7d8 00:23:18.135 [2024-10-04 06:41:10.676822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.135 [2024-10-04 06:41:10.676887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:18.135 [2024-10-04 06:41:10.686683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190eea00 00:23:18.136 [2024-10-04 06:41:10.687140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:18.136 [2024-10-04 06:41:10.687174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:18.136 [2024-10-04 06:41:10.697397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190f1ca0
00:23:18.136 [2024-10-04 06:41:10.698666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:18.136 [2024-10-04 06:41:10.698713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[... the same three-record pattern -- a data_crc32_calc_done data digest error on tqpair=(0xfaea00), the WRITE command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for each injected corruption from 06:41:10.706354 through 06:41:11.921354; those records are elided here ...]
00:23:19.438 [2024-10-04 06:41:11.927492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaea00) with pdu=0x2000190e0a68
00:23:19.438 [2024-10-04 06:41:11.927708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:19.438 [2024-10-04 06:41:11.927726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:19.438
00:23:19.438 Latency(us)
00:23:19.438 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:23:19.438 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:19.438 nvme0n1                     :       2.00   26081.36     101.88      0.00     0.00    4903.08    1906.50   12153.95
00:23:19.438 ===================================================================================================================
00:23:19.438 Total                       :              26081.36     101.88      0.00     0.00    4903.08    1906.50   12153.95
00:23:19.438 0
00:23:19.438 06:41:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:19.438 06:41:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:19.438 06:41:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:19.438 | .driver_specific
00:23:19.438 | .nvme_error
00:23:19.438 | .status_code
00:23:19.438 | .command_transient_transport_error'
00:23:19.438 06:41:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:19.697 06:41:12 -- host/digest.sh@71 -- # (( 205 > 0 ))
00:23:19.697 06:41:12 -- host/digest.sh@73 -- # killprocess 97422
00:23:19.697 06:41:12 -- common/autotest_common.sh@926 -- # '[' -z 97422 ']'
00:23:19.697 06:41:12 -- common/autotest_common.sh@930 -- # kill -0 97422
00:23:19.697 06:41:12 -- common/autotest_common.sh@931 -- # uname
00:23:19.697 06:41:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:19.697 06:41:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97422
00:23:19.697 06:41:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:19.697 06:41:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
killing process with pid 97422
06:41:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97422'
Received shutdown signal, test time was about 2.000000 seconds
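A minimal sketch of what the get_transient_errcount step above boils down to, reconstructed from the xtrace: one bdev_get_iostat RPC against bperf's socket, piped through the jq filter shown in the trace. The function name and its positional argument are assumptions taken from the xtrace, not from the script source.

    # Ask bdevperf (via its RPC socket) for per-bdev NVMe error counters and
    # extract how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR.
    get_transient_errcount() {
        local bdev=$1   # e.g. nvme0n1 (assumed positional argument, per the trace)
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test then asserts that the injected corruptions actually surfaced:
    (( $(get_transient_errcount nvme0n1) > 0 ))   # here: 205 > 0, so the check passes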
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.697 =================================================================================================================== 00:23:19.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.697 06:41:12 -- common/autotest_common.sh@945 -- # kill 97422 00:23:19.697 06:41:12 -- common/autotest_common.sh@950 -- # wait 97422 00:23:19.956 06:41:12 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:19.956 06:41:12 -- host/digest.sh@54 -- # local rw bs qd 00:23:19.956 06:41:12 -- host/digest.sh@56 -- # rw=randwrite 00:23:19.956 06:41:12 -- host/digest.sh@56 -- # bs=131072 00:23:19.956 06:41:12 -- host/digest.sh@56 -- # qd=16 00:23:19.956 06:41:12 -- host/digest.sh@58 -- # bperfpid=97514 00:23:19.956 06:41:12 -- host/digest.sh@60 -- # waitforlisten 97514 /var/tmp/bperf.sock 00:23:19.956 06:41:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:19.956 06:41:12 -- common/autotest_common.sh@819 -- # '[' -z 97514 ']' 00:23:19.956 06:41:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:19.956 06:41:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:19.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:19.956 06:41:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:19.956 06:41:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:19.956 06:41:12 -- common/autotest_common.sh@10 -- # set +x 00:23:19.956 [2024-10-04 06:41:12.589573] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:23:19.956 [2024-10-04 06:41:12.589683] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97514 ] 00:23:19.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:19.956 Zero copy mechanism will not be used. 
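The transient-error check traced above (host/digest.sh@71 together with the bperf_rpc/jq pair) collapses into one pipeline. This is a minimal sketch assembled from the exact commands in the log, not an excerpt from host/digest.sh itself; the errcount variable name is illustrative:

    # Fetch per-bdev I/O statistics from bdevperf over its private RPC socket,
    # then extract the counter that --nvme-error-stat keeps for completions
    # with the TRANSIENT TRANSPORT ERROR status code; this run counted 205.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # the subtest only passes if digest errors were actually counted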
00:23:20.215 [2024-10-04 06:41:12.727156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.215 [2024-10-04 06:41:12.806577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:21.151 06:41:13 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:21.152 06:41:13 -- common/autotest_common.sh@852 -- # return 0
00:23:21.152 06:41:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:21.152 06:41:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:21.152 06:41:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:21.152 06:41:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:21.152 06:41:13 -- common/autotest_common.sh@10 -- # set +x
00:23:21.152 06:41:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:21.152 06:41:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:21.152 06:41:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:21.410 nvme0n1
00:23:21.670 06:41:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:21.670 06:41:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:21.670 06:41:14 -- common/autotest_common.sh@10 -- # set +x
00:23:21.670 06:41:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:21.670 06:41:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:21.670 06:41:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:21.670 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:21.670 Zero copy mechanism will not be used.
00:23:21.670 Running I/O for 2 seconds...
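Stripped of the xtrace noise, the wiring for this 128 KiB subtest is the bdevperf launch plus four RPCs, all taken from the trace above. A sketch under the logged paths and addresses; the rpc variable is illustrative, and rpc_cmd is the autotest helper whose socket (presumably the nvmf target app, where the crc32c corruption takes effect) is not shown in this excerpt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # idle bdevperf: core mask 0x2, 128 KiB random writes, queue depth 16, 2 s;
    # -z makes it wait for a perform_tests RPC instead of starting immediately
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # count NVMe completions per status code; retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any crc32c error injection left over from the previous subtest
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach the controller over TCP with data digest (DDGST) enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm the crc32c corruption (flags exactly as traced above)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # release the queued workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The flood of digest errors that follows is the point of the test: with corrupted crc32c results the data digests fail verification, so each WRITE completes with TRANSIENT TRANSPORT ERROR (00/22) and bumps the counter that get_transient_errcount reads back afterwards.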
00:23:21.670 [2024-10-04 06:41:14.245426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:21.670 [2024-10-04 06:41:14.245694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.670 [2024-10-04 06:41:14.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:21.670 [2024-10-04 06:41:14.249718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:21.670 [2024-10-04 06:41:14.249874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.670 [2024-10-04 06:41:14.249913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:21.670 [2024-10-04 06:41:14.253795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:21.670 [2024-10-04 06:41:14.253933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.670 [2024-10-04 06:41:14.253956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... 2024-10-04 06:41:14.257 through 06:41:14.706 (elapsed 00:23:21.670 to 00:23:22.195): the same three-line sequence (tcp.c data digest error, nvme_qpair.c WRITE command, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 4 ms for the rest of the 2-second run, with only the timestamps, lba, and sqhd fields changing ...]
00:23:22.195 [2024-10-04 06:41:14.709812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:22.195 [2024-10-04 06:41:14.709952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.195 [2024-10-04
06:41:14.709972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.713771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.713918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.713938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.717745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.717969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.717990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.721628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.721920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.721945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.725582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.725696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.725716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.729474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.729578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.729598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.733447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.733559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.733579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.737325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.737440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.737460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.741199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.741323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.741343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.745301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.745429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.745449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.749283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.749487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.749507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.753286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.753494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.757305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.757448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.757468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.761203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.761321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.761340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.765126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.765244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.765265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.768938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.769047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.769069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.772870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.772977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.772998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.776780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.776918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.776938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.780741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.780966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.780986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.784719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.784930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.784950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.788753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.788900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.788920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.195 [2024-10-04 06:41:14.792676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.195 [2024-10-04 06:41:14.792779] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.195 [2024-10-04 06:41:14.792799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.796634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.796738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.796758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.800514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.800629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.804552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.804682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.804703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.808526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.808654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.808675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.812562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.812764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.812784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.816472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.816654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.816674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.820441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.820584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.820604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.824402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.824521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.824541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.828377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.828497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.828517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.832361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.832455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.832475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.836365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.836499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.836520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.840252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.840378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.840398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.844156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.844341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.844382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.848041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 
[2024-10-04 06:41:14.848207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.848227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.851935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.852043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.852064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.855786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.855916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.855937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.859726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.859848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.859879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.863572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.863675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.863695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.867592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.867714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.196 [2024-10-04 06:41:14.871491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.196 [2024-10-04 06:41:14.871618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.196 [2024-10-04 06:41:14.871639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.875570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) 
with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.875769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.875789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.879504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.879730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.879750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.883484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.883611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.883631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.887479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.887597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.887617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.891464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.891592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.895409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.895506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.895526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.899265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.899378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.899399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.903156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.903271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.903291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.907088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.907273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.907293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.910874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.911110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.911130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.914739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.914885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.914904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.918593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.918689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.918709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.922548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.922666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.922685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.926418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.926531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.926551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.930348] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.930487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.930507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.934258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.934382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.934403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.938267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.938468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.938488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.458 [2024-10-04 06:41:14.942098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.458 [2024-10-04 06:41:14.942267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.458 [2024-10-04 06:41:14.942287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.946043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.946166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.946187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.949877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.949957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.949976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.953766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.953876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.953896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:22.459 [2024-10-04 06:41:14.957629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.957725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.957746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.961497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.961620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.961640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.965482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.965607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.965627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.969482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.969685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.969705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.973344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.973528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.973547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.977416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.977549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.977569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.981307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.981423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.981443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.985251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.985359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.985380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.989139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.989253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.989274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.993077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.993222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:14.997032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:14.997157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:14.997177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.001054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.001260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.001281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.005039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.005215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.005250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.009012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.009138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.009159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.012872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.012981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.013001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.016825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.016936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.016955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.020692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.020824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.024607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.024731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.024751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.028562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.028689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.028709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.032686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.032901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.032921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.036630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.036821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.036841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.040649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.040775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.040795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.044586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.044681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.044701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.459 [2024-10-04 06:41:15.048570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.459 [2024-10-04 06:41:15.048664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.459 [2024-10-04 06:41:15.048685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.460 [2024-10-04 06:41:15.052500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.460 [2024-10-04 06:41:15.052610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.460 [2024-10-04 06:41:15.052631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.460 [2024-10-04 06:41:15.056441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.460 [2024-10-04 06:41:15.056563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.460 [2024-10-04 06:41:15.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.460 [2024-10-04 06:41:15.060349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.460 [2024-10-04 06:41:15.060475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.460 [2024-10-04 06:41:15.060495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.460 [2024-10-04 06:41:15.064295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.460 [2024-10-04 06:41:15.064496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.460 
00:23:22.460 [2024-10-04 06:41:15.064515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.460 [2024-10-04 06:41:15.068165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:22.460 [2024-10-04 06:41:15.068374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.460 [2024-10-04 06:41:15.068393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error -> WRITE command print -> TRANSIENT TRANSPORT ERROR completion) repeats for every subsequent WRITE from 06:41:15.072 through 06:41:15.636 (elapsed 00:23:22.460-00:23:22.987), all on tqpair=(0xfaeba0) with pdu=0x2000190fef90, qid:1 cid:15 nsid:1 len:32; only the lba varies per command and sqhd cycles 0001/0021/0041/0061 ...]
00:23:22.987 [2024-10-04 06:41:15.640310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.640432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.640453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.987 [2024-10-04 06:41:15.644240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.644356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.644376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.987 [2024-10-04 06:41:15.648112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.648199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.648219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.987 [2024-10-04 06:41:15.652030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.652123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.652142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.987 [2024-10-04 06:41:15.655853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.655970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.655989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.987 [2024-10-04 06:41:15.659706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:22.987 [2024-10-04 06:41:15.659832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.987 [2024-10-04 06:41:15.659864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.663662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.663897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.663917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.667613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.667796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.667816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.671492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.671620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.671640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.675366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.675475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.675495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.679204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.679287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.679308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.683000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.683130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.683150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.686887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.687054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.687075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.690833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.690959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.690979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.694768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.694981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.695001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.698617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.698845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.698880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.702566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.702690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.702710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.706431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.706538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.706557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.710353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.710448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.710468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.714175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.714277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.714296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.718095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.718219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.718239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.722097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.722223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.722243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.726042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.726241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.726261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.729944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.730139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.730158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.733959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.734084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.734104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.737808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.248 [2024-10-04 06:41:15.737917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.248 [2024-10-04 06:41:15.737936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.248 [2024-10-04 06:41:15.741659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.741769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.741789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.745698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.745813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 
06:41:15.745843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.749633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.749753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.749772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.753601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.753729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.753749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.757525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.757727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.757747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.761406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.761630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.761650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.765244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.765371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.765391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.769142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.769228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.769247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.773003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.773097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.773117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.776818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.776936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.776956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.780804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.780940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.780961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.784703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.784830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.784862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.788705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.788921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.788941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.792714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.792957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.792977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.796681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.796823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.800659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.800756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.800775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.804559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.804675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.804694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.808507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.808616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.808636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.812494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.812616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.812636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.816459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.816597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.816617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.820547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.820749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.820769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.824547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.824784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.824867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.828546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.828670] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.828691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.832610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.832716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.832736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.836481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.836596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.836616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.840423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.840519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.840538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.844383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.249 [2024-10-04 06:41:15.844515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.249 [2024-10-04 06:41:15.844535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.249 [2024-10-04 06:41:15.848302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.848428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.848448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.852375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.852597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.852618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.856291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.856506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.860233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.860372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.860392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.864088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.864206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.864225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.868094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.868206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.868226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.872059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.872155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.872175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.876017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.876141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.876160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.879898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.880029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.880048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.883885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 
06:41:15.884092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.884112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.887844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.888083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.888140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.891953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.892106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.892126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.895954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.896067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.896088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.899926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.900039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.900059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.903843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.903954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.903974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.907742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.907876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.907896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.911654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with 
pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.911782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.911802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.915656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.915874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.915894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.919636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.919818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.919839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.250 [2024-10-04 06:41:15.923615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.250 [2024-10-04 06:41:15.923742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.250 [2024-10-04 06:41:15.923762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.510 [2024-10-04 06:41:15.927566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.510 [2024-10-04 06:41:15.927666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.510 [2024-10-04 06:41:15.927687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.510 [2024-10-04 06:41:15.931541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.510 [2024-10-04 06:41:15.931682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.510 [2024-10-04 06:41:15.931702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.510 [2024-10-04 06:41:15.935594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.510 [2024-10-04 06:41:15.935705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.510 [2024-10-04 06:41:15.935725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.510 [2024-10-04 06:41:15.939613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.510 [2024-10-04 06:41:15.939737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.510 [2024-10-04 06:41:15.939758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.510 [2024-10-04 06:41:15.943586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.943714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.943734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.947611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.947821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.947842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.951596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.951801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.955605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.955731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.955751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.959615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.959714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.959734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.963551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.963647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.963666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.967537] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.967634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.967653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.971501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.971633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.971653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.975493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.975620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.975640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.979555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.979757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.979777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.983473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.983649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.983669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.987469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.987595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.987615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.511 [2024-10-04 06:41:15.991440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90 00:23:23.511 [2024-10-04 06:41:15.991555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.511 [2024-10-04 06:41:15.991574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:23.511 [2024-10-04 06:41:15.995396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:23.511 [2024-10-04 06:41:15.995506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:23.511 [2024-10-04 06:41:15.995526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further identical digest-error triplets omitted: same tqpair and pdu, WRITE commands with varying lba (cid:15, later cid:0; sqhd cycling 0001/0021/0041/0061), each completed as TRANSIENT TRANSPORT ERROR (00/22) ...]
00:23:23.773 [2024-10-04 06:41:16.234499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfaeba0) with pdu=0x2000190fef90
00:23:23.773 [2024-10-04 06:41:16.234728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:23.773 [2024-10-04 06:41:16.234764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
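Each triplet above records one WRITE whose CRC32C data digest failed verification: tcp.c rejects the PDU on the qpair, and the command is completed back to the bperf job as a transient transport error (00/22), which is the behavior this digest test is designed to provoke and count. For context, a minimal sketch of how an NVMe/TCP controller is attached with data digest enabled; the bdev name, address, and subsystem NQN below are illustrative, and the --ddgst flag is assumed to be available in this build's rpc.py:

  # Sketch: request per-PDU CRC32C data digests on the TCP connection
  # (illustrative values; --ddgst assumed present in scripts/rpc.py).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ddgst

With digests enabled on both ends, corruption of a data payload in flight surfaces as exactly the data_crc32_calc_done errors shown here rather than as silent data corruption.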
00:23:23.773
00:23:23.773 Latency(us)
00:23:23.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.773 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:23.773 nvme0n1 : 2.00 7833.86 979.23 0.00 0.00 2038.18 1593.72 7745.16
00:23:23.773 ===================================================================================================================
00:23:23.773 Total : 7833.86 979.23 0.00 0.00 2038.18 1593.72 7745.16
00:23:23.773 0
00:23:23.773 06:41:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:23.773 06:41:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:23.773 06:41:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:23.773 06:41:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:23:24.032 06:41:16 -- host/digest.sh@71 -- # (( 505 > 0 ))
00:23:24.032 06:41:16 -- host/digest.sh@73 -- # killprocess 97514
00:23:24.032 06:41:16 -- common/autotest_common.sh@926 -- # '[' -z 97514 ']'
00:23:24.032 06:41:16 -- common/autotest_common.sh@930 -- # kill -0 97514
00:23:24.032 06:41:16 -- common/autotest_common.sh@931 -- # uname
00:23:24.032 06:41:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:24.032 06:41:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97514
00:23:24.032 06:41:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:24.032 06:41:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:24.032 killing process with pid 97514
00:23:24.032 06:41:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97514'
00:23:24.032 Received shutdown signal, test time was about 2.000000 seconds
00:23:24.032
00:23:24.032 Latency(us)
00:23:24.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.032 ===================================================================================================================
00:23:24.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:24.032 06:41:16 -- common/autotest_common.sh@945 -- # kill 97514
00:23:24.032 06:41:16 -- common/autotest_common.sh@950 -- # wait 97514
00:23:24.290 06:41:16 -- host/digest.sh@115 -- # killprocess 97197
00:23:24.290 06:41:16 -- common/autotest_common.sh@926 -- # '[' -z 97197 ']'
00:23:24.290 06:41:16 -- common/autotest_common.sh@930 -- # kill -0 97197
00:23:24.290 06:41:16 -- common/autotest_common.sh@931 -- # uname
00:23:24.290 06:41:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:24.290 06:41:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97197
00:23:24.290 06:41:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:23:24.290 06:41:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:23:24.290 killing process with pid 97197
00:23:24.290 06:41:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97197'
00:23:24.290 06:41:16 -- common/autotest_common.sh@945 -- # kill 97197
00:23:24.290 06:41:16 -- common/autotest_common.sh@950 -- # wait 97197
00:23:24.549
00:23:24.549 real	0m18.950s
00:23:24.549 user	0m35.883s
00:23:24.549 sys	0m5.028s
00:23:24.549 06:41:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:24.549 ************************************
00:23:24.549 END TEST nvmf_digest_error
00:23:24.549 ************************************
00:23:24.549 06:41:17 -- common/autotest_common.sh@10 -- # set +x
00:23:24.549 06:41:17 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
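The (( 505 > 0 )) check above is the core assertion of this test: get_transient_errcount pulls the per-bdev NVMe error counters out of bperf over its RPC socket and requires that at least one transient transport error was seen. The same query can be reproduced by hand with the exact filter from digest.sh; a sketch, assuming the bperf socket at /var/tmp/bperf.sock is still listening:

  # Count transient transport errors seen by the nvme0n1 bdev, as
  # host/digest.sh@27-28 does above (the dotted jq path is equivalent
  # to the piped filter in the trace).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 )) && echo "observed $count transient transport errors"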
00:23:24.549 06:41:17 -- host/digest.sh@139 -- # nvmftestfini 00:23:24.549 06:41:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:24.549 06:41:17 -- nvmf/common.sh@116 -- # sync 00:23:24.809 06:41:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:24.809 06:41:17 -- nvmf/common.sh@119 -- # set +e 00:23:24.809 06:41:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:24.809 06:41:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:24.809 rmmod nvme_tcp 00:23:24.809 rmmod nvme_fabrics 00:23:24.809 rmmod nvme_keyring 00:23:24.809 06:41:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:24.809 06:41:17 -- nvmf/common.sh@123 -- # set -e 00:23:24.809 06:41:17 -- nvmf/common.sh@124 -- # return 0 00:23:24.809 06:41:17 -- nvmf/common.sh@477 -- # '[' -n 97197 ']' 00:23:24.809 06:41:17 -- nvmf/common.sh@478 -- # killprocess 97197 00:23:24.809 06:41:17 -- common/autotest_common.sh@926 -- # '[' -z 97197 ']' 00:23:24.809 06:41:17 -- common/autotest_common.sh@930 -- # kill -0 97197 00:23:24.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (97197) - No such process 00:23:24.809 Process with pid 97197 is not found 00:23:24.809 06:41:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 97197 is not found' 00:23:24.809 06:41:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:24.809 06:41:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:24.809 06:41:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:24.809 06:41:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.809 06:41:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:24.809 06:41:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.809 06:41:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.809 06:41:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.809 06:41:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:24.809 00:23:24.809 real 0m37.157s 00:23:24.809 user 1m8.636s 00:23:24.809 sys 0m10.066s 00:23:24.809 06:41:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.809 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:24.809 ************************************ 00:23:24.809 END TEST nvmf_digest 00:23:24.809 ************************************ 00:23:24.809 06:41:17 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:24.809 06:41:17 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:24.809 06:41:17 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:24.809 06:41:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:24.809 06:41:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:24.809 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:24.809 ************************************ 00:23:24.809 START TEST nvmf_mdns_discovery 00:23:24.809 ************************************ 00:23:24.809 06:41:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:25.069 * Looking for test storage... 
00:23:25.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:25.069 06:41:17 -- nvmf/common.sh@7 -- # uname -s 00:23:25.069 06:41:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.069 06:41:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.069 06:41:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.069 06:41:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.069 06:41:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.069 06:41:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.069 06:41:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.069 06:41:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.069 06:41:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.069 06:41:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:23:25.069 06:41:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:23:25.069 06:41:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.069 06:41:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.069 06:41:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:25.069 06:41:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.069 06:41:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.069 06:41:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.069 06:41:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.069 06:41:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.069 06:41:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.069 06:41:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.069 06:41:17 -- 
paths/export.sh@5 -- # export PATH 00:23:25.069 06:41:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.069 06:41:17 -- nvmf/common.sh@46 -- # : 0 00:23:25.069 06:41:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:25.069 06:41:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:25.069 06:41:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:25.069 06:41:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.069 06:41:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.069 06:41:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:25.069 06:41:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:25.069 06:41:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:25.069 06:41:17 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:25.069 06:41:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:25.069 06:41:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.069 06:41:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:25.069 06:41:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:25.069 06:41:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:25.069 06:41:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.069 06:41:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.069 06:41:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.069 06:41:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:25.069 06:41:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:25.069 06:41:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.069 06:41:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.069 06:41:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:25.069 06:41:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:25.069 06:41:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:25.069 06:41:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:25.069 06:41:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:25.069 06:41:17 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.069 06:41:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:25.069 06:41:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:25.069 06:41:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:25.069 06:41:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:25.069 06:41:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:25.069 06:41:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:25.069 Cannot find device "nvmf_tgt_br" 00:23:25.069 06:41:17 -- nvmf/common.sh@154 -- # true 00:23:25.069 06:41:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.069 Cannot find device "nvmf_tgt_br2" 00:23:25.069 06:41:17 -- nvmf/common.sh@155 -- # true 00:23:25.069 06:41:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:25.069 06:41:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:25.069 Cannot find device "nvmf_tgt_br" 00:23:25.069 06:41:17 -- nvmf/common.sh@157 -- # true 00:23:25.069 06:41:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:25.069 Cannot find device "nvmf_tgt_br2" 00:23:25.069 06:41:17 -- nvmf/common.sh@158 -- # true 00:23:25.069 06:41:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:25.069 06:41:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:25.070 06:41:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.070 06:41:17 -- nvmf/common.sh@161 -- # true 00:23:25.070 06:41:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:25.070 06:41:17 -- nvmf/common.sh@162 -- # true 00:23:25.070 06:41:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:25.070 06:41:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:25.070 06:41:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:25.070 06:41:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:25.070 06:41:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:25.070 06:41:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:25.329 06:41:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:25.329 06:41:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:25.329 06:41:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:25.329 06:41:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:25.329 06:41:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:25.329 06:41:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:25.329 06:41:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:25.329 06:41:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:25.329 06:41:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:25.329 06:41:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:25.329 06:41:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:23:25.329 06:41:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:25.329 06:41:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:25.329 06:41:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:25.329 06:41:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:25.329 06:41:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:25.329 06:41:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:25.329 06:41:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:25.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:23:25.329 00:23:25.329 --- 10.0.0.2 ping statistics --- 00:23:25.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.329 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:25.329 06:41:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:25.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:25.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:25.329 00:23:25.329 --- 10.0.0.3 ping statistics --- 00:23:25.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.329 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:25.329 06:41:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:25.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:25.329 00:23:25.329 --- 10.0.0.1 ping statistics --- 00:23:25.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.329 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:25.329 06:41:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.329 06:41:17 -- nvmf/common.sh@421 -- # return 0 00:23:25.330 06:41:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:25.330 06:41:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.330 06:41:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:25.330 06:41:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:25.330 06:41:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.330 06:41:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:25.330 06:41:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:25.330 06:41:17 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:25.330 06:41:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:25.330 06:41:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:25.330 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:25.330 06:41:17 -- nvmf/common.sh@469 -- # nvmfpid=97812 00:23:25.330 06:41:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:25.330 06:41:17 -- nvmf/common.sh@470 -- # waitforlisten 97812 00:23:25.330 06:41:17 -- common/autotest_common.sh@819 -- # '[' -z 97812 ']' 00:23:25.330 06:41:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.330 06:41:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.330 06:41:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
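The three ping checks above complete the nvmf_veth_init topology: the target-side veth ends (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps nvmf_init_if at 10.0.0.1, and the peer ends are enslaved to the nvmf_br bridge. Boiled down to a single target interface and with the error handling omitted, the plumbing performed by nvmf/common.sh is roughly:

  # Condensed sketch of the veth/netns setup (second target interface
  # and cleanup paths omitted).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target across the bridge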
00:23:25.330 06:41:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.330 06:41:17 -- common/autotest_common.sh@10 -- # set +x 00:23:25.330 [2024-10-04 06:41:17.987077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:23:25.330 [2024-10-04 06:41:17.987149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.589 [2024-10-04 06:41:18.117660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.589 [2024-10-04 06:41:18.194754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.589 [2024-10-04 06:41:18.194953] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.589 [2024-10-04 06:41:18.194968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.589 [2024-10-04 06:41:18.194976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.589 [2024-10-04 06:41:18.195002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.589 06:41:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:25.589 06:41:18 -- common/autotest_common.sh@852 -- # return 0 00:23:25.589 06:41:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:25.590 06:41:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:25.590 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 06:41:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 [2024-10-04 06:41:18.449749] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 [2024-10-04 06:41:18.461935] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.850 null0 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 null1 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 null2 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 null3 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:25.850 06:41:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:25.850 06:41:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@47 -- # hostpid=97844 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:25.850 06:41:18 -- host/mdns_discovery.sh@48 -- # waitforlisten 97844 /tmp/host.sock 00:23:25.850 06:41:18 -- common/autotest_common.sh@819 -- # '[' -z 97844 ']' 00:23:25.850 06:41:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:23:25.850 06:41:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:25.850 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:25.850 06:41:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:25.850 06:41:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:25.850 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 [2024-10-04 06:41:18.560199] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:23:26.108 [2024-10-04 06:41:18.560317] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97844 ] 00:23:26.108 [2024-10-04 06:41:18.696106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.108 [2024-10-04 06:41:18.777702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:26.108 [2024-10-04 06:41:18.777938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.045 06:41:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:27.045 06:41:19 -- common/autotest_common.sh@852 -- # return 0 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@57 -- # avahipid=97880 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:27.045 06:41:19 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:27.045 Process 1065 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:27.045 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:27.045 Successfully dropped root privileges. 00:23:27.045 avahi-daemon 0.8 starting up. 00:23:27.045 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:27.045 Successfully called chroot(). 00:23:27.045 Successfully dropped remaining capabilities. 00:23:27.045 No service file found in /etc/avahi/services. 00:23:27.045 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:27.045 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:27.045 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:27.045 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:27.045 Network interface enumeration completed. 00:23:27.045 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:27.045 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:27.045 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:27.045 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:27.981 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3852326627. 
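The avahi-daemon that just joined the two mDNS multicast groups was launched by mdns_discovery.sh@55-58 with its configuration supplied inline over /dev/fd/63; reconstructed as shell, the launch is roughly the following (the process substitution is inferred from the /dev/fd/63 argument in the trace):

  # Restart avahi inside the target namespace, restricted to the two
  # target-side interfaces and IPv4 only, as mdns_discovery.sh does above.
  avahi-daemon --kill
  ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
      '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
  avahipid=$!
  sleep 1   # let the daemon register its address records before discovery starts

Limiting avahi to nvmf_tgt_if/nvmf_tgt_if2 keeps the _nvme-disc._tcp advertisements on the test bridge, so the mDNS discovery that follows sees only the two target listeners.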
00:23:28.239 06:41:20 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:28.239 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.239 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.239 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:28.239 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.239 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.239 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@68 -- # sort 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@68 -- # xargs 00:23:28.239 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.239 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.239 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.239 06:41:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:28.239 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.239 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.240 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.240 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.240 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.240 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.240 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.240 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@68 -- # sort 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@68 -- # xargs 00:23:28.240 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.240 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.240 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.240 06:41:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.240 06:41:20 -- 
host/mdns_discovery.sh@64 -- # xargs 00:23:28.240 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:28.499 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.499 06:41:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@68 -- # sort 00:23:28.499 06:41:20 -- host/mdns_discovery.sh@68 -- # xargs 00:23:28.499 06:41:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@64 -- # xargs 00:23:28.499 [2024-10-04 06:41:21.030304] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 [2024-10-04 06:41:21.086432] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 [2024-10-04 06:41:21.126364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:28.499 06:41:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:28.499 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:23:28.499 [2024-10-04 06:41:21.134363] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.499 06:41:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97931 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:28.499 06:41:21 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:29.434 [2024-10-04 06:41:21.930304] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:29.434 Established under name 'CDC' 00:23:29.692 [2024-10-04 06:41:22.330315] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:29.692 [2024-10-04 06:41:22.330336] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:29.692 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:29.692 cookie is 0 00:23:29.692 is_local: 1 00:23:29.692 our_own: 0 00:23:29.692 wide_area: 0 00:23:29.692 multicast: 1 00:23:29.692 cached: 1 00:23:29.949 [2024-10-04 06:41:22.430308] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:29.949 [2024-10-04 06:41:22.430327] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:29.949 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:29.949 cookie is 0 00:23:29.949 is_local: 1 00:23:29.949 our_own: 0 00:23:29.949 wide_area: 0 00:23:29.949 multicast: 1 00:23:29.949 cached: 1 00:23:30.883 [2024-10-04 06:41:23.334931] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:30.883 [2024-10-04 06:41:23.334959] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:30.883 [2024-10-04 06:41:23.334976] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:30.883 [2024-10-04 06:41:23.421040] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:30.883 [2024-10-04 06:41:23.434700] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery 
ctrlr attached 00:23:30.883 [2024-10-04 06:41:23.434720] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:30.883 [2024-10-04 06:41:23.434735] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.883 [2024-10-04 06:41:23.479345] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:30.883 [2024-10-04 06:41:23.479371] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:30.883 [2024-10-04 06:41:23.522490] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:31.141 [2024-10-04 06:41:23.584086] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:31.141 [2024-10-04 06:41:23.584110] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:33.676 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.676 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@80 -- # sort 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@80 -- # xargs 00:23:33.676 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:33.676 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@76 -- # sort 00:23:33.676 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@76 -- # xargs 00:23:33.676 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:33.676 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:33.676 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@68 -- # xargs 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@68 -- # sort 00:23:33.676 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.676 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.676 
06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:33.676 06:41:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:33.676 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.991 06:41:26 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:33.992 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:33.992 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # xargs 00:23:33.992 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:33.992 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.992 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@72 -- # xargs 00:23:33.992 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:33.992 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.992 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:33.992 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:33.992 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.992 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:33.992 06:41:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:33.992 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 06:41:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:33.992 06:41:26 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:34.941 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.941 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@64 -- # sort 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@64 -- # xargs 00:23:34.941 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:34.941 06:41:27 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:35.200 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.200 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:35.200 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:35.200 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.200 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:23:35.200 [2024-10-04 06:41:27.676863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.200 [2024-10-04 06:41:27.677517] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:35.200 [2024-10-04 06:41:27.677703] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.200 [2024-10-04 06:41:27.677884] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:35.200 [2024-10-04 06:41:27.677997] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:35.200 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:35.200 06:41:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:35.200 06:41:27 -- common/autotest_common.sh@10 -- # set +x 00:23:35.200 [2024-10-04 06:41:27.684892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:35.200 [2024-10-04 06:41:27.685526] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:35.200 [2024-10-04 06:41:27.685744] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:35.200 06:41:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:35.200 06:41:27 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:35.201 [2024-10-04 06:41:27.815604] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:35.201 [2024-10-04 06:41:27.816601] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:35.201 [2024-10-04 06:41:27.877767] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:35.201 [2024-10-04 06:41:27.877788] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:35.201 [2024-10-04 06:41:27.877793] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:35.201 [2024-10-04 06:41:27.877807] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:35.201 [2024-10-04 06:41:27.877952] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:35.201 [2024-10-04 06:41:27.877963] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:35.201 [2024-10-04 06:41:27.877968] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:35.201 [2024-10-04 06:41:27.877981] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:35.459 [2024-10-04 06:41:27.923702] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:35.459 [2024-10-04 06:41:27.923873] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:35.459 [2024-10-04 06:41:27.923931] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:35.459 [2024-10-04 06:41:27.923940] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:36.026 06:41:28 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:36.026 06:41:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:36.026 06:41:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:36.026 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.026 06:41:28 -- host/mdns_discovery.sh@68 -- # sort 00:23:36.026 06:41:28 -- host/mdns_discovery.sh@68 -- # xargs 00:23:36.026 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.284 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:36.284 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.284 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.284 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.284 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:36.284 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:36.284 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:36.284 06:41:28 -- common/autotest_common.sh@551 -- 
# xtrace_disable 00:23:36.284 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@72 -- # xargs 00:23:36.284 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:36.284 06:41:28 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:36.284 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.284 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.284 06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.544 06:41:28 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:36.544 06:41:28 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:36.544 06:41:28 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:36.544 06:41:28 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.544 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.544 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.544 [2024-10-04 06:41:28.993676] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:36.544 [2024-10-04 06:41:28.993886] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.544 [2024-10-04 06:41:28.994073] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:36.544 [2024-10-04 06:41:28.994182] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:36.544 [2024-10-04 06:41:28.994366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.544 [2024-10-04 06:41:28.994531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.544 [2024-10-04 06:41:28.994592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.544 [2024-10-04 06:41:28.994698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.544 [2024-10-04 06:41:28.994745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.544 [2024-10-04 06:41:28.994891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.544 [2024-10-04 06:41:28.994943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.544 [2024-10-04 06:41:28.995114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.544 [2024-10-04 06:41:28.995244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.544 
06:41:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.544 06:41:28 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:36.544 06:41:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:36.544 06:41:28 -- common/autotest_common.sh@10 -- # set +x 00:23:36.544 [2024-10-04 06:41:29.002453] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:36.544 [2024-10-04 06:41:29.002644] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:36.544 [2024-10-04 06:41:29.004323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 06:41:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:36.545 06:41:29 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:36.545 [2024-10-04 06:41:29.008270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.545 [2024-10-04 06:41:29.008453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.545 [2024-10-04 06:41:29.008565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.545 [2024-10-04 06:41:29.008751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.545 [2024-10-04 06:41:29.008804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.545 [2024-10-04 06:41:29.008943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.545 [2024-10-04 06:41:29.008958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.545 [2024-10-04 06:41:29.008967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.545 [2024-10-04 06:41:29.008976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.014350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.545 [2024-10-04 06:41:29.014435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.014475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.014490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.545 [2024-10-04 06:41:29.014499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.014513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.014526] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.014534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.014549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.545 [2024-10-04 06:41:29.014569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.545 [2024-10-04 06:41:29.018241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.024396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.545 [2024-10-04 06:41:29.024462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.024499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.024513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.545 [2024-10-04 06:41:29.024523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.024536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.024548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.024555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.024562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.545 [2024-10-04 06:41:29.024574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.545 [2024-10-04 06:41:29.028250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.545 [2024-10-04 06:41:29.028315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.028353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.028366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.545 [2024-10-04 06:41:29.028375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.028389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.028400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.028408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.028415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.545 [2024-10-04 06:41:29.028427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
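The repeating connect()/reset blocks in this stretch are expected: host/mdns_discovery.sh@160-161 just removed the port 4420 listeners, so every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused until the next discovery log page prunes the stale paths (the "not found" / "found again" lines further below). The errno in the posix_sock_create errors decodes as connection refused; a quick check, assuming python3 is available on the host:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused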
00:23:36.545 [2024-10-04 06:41:29.034437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.545 [2024-10-04 06:41:29.034500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.034537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.034550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.545 [2024-10-04 06:41:29.034560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.034573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.034584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.034592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.034599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.545 [2024-10-04 06:41:29.034611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.545 [2024-10-04 06:41:29.038291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.545 [2024-10-04 06:41:29.038355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.038392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.038405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.545 [2024-10-04 06:41:29.038415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.038428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.038439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.038447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.038454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.545 [2024-10-04 06:41:29.038466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.545 [2024-10-04 06:41:29.044478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.545 [2024-10-04 06:41:29.044541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.044577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.044590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.545 [2024-10-04 06:41:29.044599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.044612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.044624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.044631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.044639] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.545 [2024-10-04 06:41:29.044650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.545 [2024-10-04 06:41:29.048333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.545 [2024-10-04 06:41:29.048403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.048442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.048455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.545 [2024-10-04 06:41:29.048464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.048478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.048491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.048498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.048505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.545 [2024-10-04 06:41:29.048518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.545 [2024-10-04 06:41:29.054518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.545 [2024-10-04 06:41:29.054583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.054620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.545 [2024-10-04 06:41:29.054633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.545 [2024-10-04 06:41:29.054642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.545 [2024-10-04 06:41:29.054656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.545 [2024-10-04 06:41:29.054668] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.545 [2024-10-04 06:41:29.054675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.545 [2024-10-04 06:41:29.054682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.545 [2024-10-04 06:41:29.054694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.545 [2024-10-04 06:41:29.058377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.546 [2024-10-04 06:41:29.058440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.058477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.058490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.546 [2024-10-04 06:41:29.058498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.058511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.058523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.058531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.058546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.546 [2024-10-04 06:41:29.058557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.546 [2024-10-04 06:41:29.064559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.546 [2024-10-04 06:41:29.064620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.064657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.064669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.546 [2024-10-04 06:41:29.064678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.064691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.064702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.064709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.064717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.546 [2024-10-04 06:41:29.064728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.546 [2024-10-04 06:41:29.068427] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.546 [2024-10-04 06:41:29.068492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.068530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.068544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.546 [2024-10-04 06:41:29.068554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.068567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.068580] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.068587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.068595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.546 [2024-10-04 06:41:29.068608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.546 [2024-10-04 06:41:29.074599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.546 [2024-10-04 06:41:29.074660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.074696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.074709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.546 [2024-10-04 06:41:29.074718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.074730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.074742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.074750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.074757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.546 [2024-10-04 06:41:29.074769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.546 [2024-10-04 06:41:29.078468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.546 [2024-10-04 06:41:29.078531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.078568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.078581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.546 [2024-10-04 06:41:29.078590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.078602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.078615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.078622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.078630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.546 [2024-10-04 06:41:29.078641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.546 [2024-10-04 06:41:29.084638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.546 [2024-10-04 06:41:29.084700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.084736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.084749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.546 [2024-10-04 06:41:29.084758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.084771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.084783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.084790] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.084797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.546 [2024-10-04 06:41:29.084809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.546 [2024-10-04 06:41:29.088508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.546 [2024-10-04 06:41:29.088576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.088614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.088628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.546 [2024-10-04 06:41:29.088637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.088649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.088662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.088669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.088676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.546 [2024-10-04 06:41:29.088688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.546 [2024-10-04 06:41:29.094679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.546 [2024-10-04 06:41:29.094747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.094785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.094798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.546 [2024-10-04 06:41:29.094807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.094835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.094865] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.094873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.094881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.546 [2024-10-04 06:41:29.094893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.546 [2024-10-04 06:41:29.098552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.546 [2024-10-04 06:41:29.098615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.098652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.098666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.546 [2024-10-04 06:41:29.098675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.546 [2024-10-04 06:41:29.098689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.546 [2024-10-04 06:41:29.098701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.546 [2024-10-04 06:41:29.098708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.546 [2024-10-04 06:41:29.098716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.546 [2024-10-04 06:41:29.098728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.546 [2024-10-04 06:41:29.104723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.546 [2024-10-04 06:41:29.104785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.104834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.546 [2024-10-04 06:41:29.104850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548f50 with addr=10.0.0.2, port=4420 00:23:36.546 [2024-10-04 06:41:29.104859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548f50 is same with the state(5) to be set 00:23:36.547 [2024-10-04 06:41:29.104872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548f50 (9): Bad file descriptor 00:23:36.547 [2024-10-04 06:41:29.104884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.547 [2024-10-04 06:41:29.104891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:36.547 [2024-10-04 06:41:29.104900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.547 [2024-10-04 06:41:29.104912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.547 [2024-10-04 06:41:29.108591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:36.547 [2024-10-04 06:41:29.108654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.547 [2024-10-04 06:41:29.108692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.547 [2024-10-04 06:41:29.108705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f61f0 with addr=10.0.0.3, port=4420 00:23:36.547 [2024-10-04 06:41:29.108714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f61f0 is same with the state(5) to be set 00:23:36.547 [2024-10-04 06:41:29.108727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f61f0 (9): Bad file descriptor 00:23:36.547 [2024-10-04 06:41:29.108739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:36.547 [2024-10-04 06:41:29.108746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:36.547 [2024-10-04 06:41:29.108754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:36.547 [2024-10-04 06:41:29.108766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.547 [2024-10-04 06:41:29.132139] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:36.547 [2024-10-04 06:41:29.132164] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:36.547 [2024-10-04 06:41:29.132181] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:36.547 [2024-10-04 06:41:29.133133] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:36.547 [2024-10-04 06:41:29.133155] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:36.547 [2024-10-04 06:41:29.133169] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.547 [2024-10-04 06:41:29.218212] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:36.547 [2024-10-04 06:41:29.219205] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:37.482 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.482 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@68 -- # sort 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@68 -- # xargs 00:23:37.482 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@64 -- # sort 00:23:37.482 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@64 -- # xargs 00:23:37.482 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.482 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:37.482 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.482 06:41:30 -- host/mdns_discovery.sh@72 -- # xargs 00:23:37.482 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.482 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
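The surrounding checks are built from small jq pipelines over the RPC unix socket; a minimal sketch of the get_subsystem_names and get_bdev_list helpers as they appear in this trace (rpc_cmd is the autotest suite's wrapper around scripts/rpc.py; the exact helper bodies live in host/mdns_discovery.sh):

  get_subsystem_names() {
      # controller names attached via mDNS discovery, sorted and space-joined
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      # bdev names created for those controllers, sorted and space-joined
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }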
00:23:37.740 06:41:30 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@72 -- # xargs 00:23:37.740 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.740 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.740 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:37.740 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.740 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.740 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:37.740 06:41:30 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:37.741 06:41:30 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:37.741 06:41:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:37.741 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.741 06:41:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:37.741 06:41:30 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:37.741 [2024-10-04 06:41:30.330310] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:38.676 06:41:31 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:38.676 06:41:31 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:38.676 06:41:31 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:38.676 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.676 06:41:31 -- host/mdns_discovery.sh@80 -- # sort 00:23:38.676 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.676 06:41:31 -- host/mdns_discovery.sh@80 -- # xargs 00:23:38.676 06:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:38.934 06:41:31 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:38.934 06:41:31 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:38.934 06:41:31 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.934 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.934 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.934 06:41:31 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@68 -- # sort 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@68 -- # xargs 00:23:38.935 06:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:38.935 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.935 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@64 -- # sort 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@64 -- # xargs 00:23:38.935 06:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:38.935 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.935 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.935 06:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:38.935 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.935 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.935 06:41:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:38.935 06:41:31 -- common/autotest_common.sh@640 -- # local es=0 00:23:38.935 06:41:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:38.935 06:41:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:38.935 06:41:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:38.935 06:41:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:38.935 06:41:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:38.935 06:41:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:38.935 06:41:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:38.935 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:23:38.935 [2024-10-04 06:41:31.536176] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:38.935 2024/10/04 06:41:31 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:38.935 request: 00:23:38.935 { 00:23:38.935 "method": "bdev_nvme_start_mdns_discovery", 00:23:38.935 "params": { 00:23:38.935 "name": "mdns", 00:23:38.935 "svcname": "_nvme-disc._http", 00:23:38.935 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:38.935 } 00:23:38.935 } 00:23:38.935 Got JSON-RPC error response 00:23:38.935 GoRPCClient: error on JSON-RPC call 00:23:38.935 06:41:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:38.935 06:41:31 -- 
common/autotest_common.sh@643 -- # es=1 00:23:38.935 06:41:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:38.935 06:41:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:38.935 06:41:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:38.935 06:41:31 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:39.502 [2024-10-04 06:41:31.920695] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:39.502 [2024-10-04 06:41:32.020692] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:39.502 [2024-10-04 06:41:32.120697] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:39.502 [2024-10-04 06:41:32.120714] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:39.502 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:39.502 cookie is 0 00:23:39.502 is_local: 1 00:23:39.502 our_own: 0 00:23:39.502 wide_area: 0 00:23:39.502 multicast: 1 00:23:39.502 cached: 1 00:23:39.760 [2024-10-04 06:41:32.220699] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:39.760 [2024-10-04 06:41:32.220722] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:39.760 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:39.760 cookie is 0 00:23:39.760 is_local: 1 00:23:39.760 our_own: 0 00:23:39.760 wide_area: 0 00:23:39.760 multicast: 1 00:23:39.760 cached: 1 00:23:40.696 [2024-10-04 06:41:33.126141] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:40.696 [2024-10-04 06:41:33.126164] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:40.696 [2024-10-04 06:41:33.126180] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:40.696 [2024-10-04 06:41:33.212251] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:40.696 [2024-10-04 06:41:33.226068] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.696 [2024-10-04 06:41:33.226087] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.696 [2024-10-04 06:41:33.226101] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.696 [2024-10-04 06:41:33.274104] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:40.696 [2024-10-04 06:41:33.274129] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:40.696 [2024-10-04 06:41:33.312262] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:40.696 [2024-10-04 06:41:33.370898] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:40.696 [2024-10-04 06:41:33.370923] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:43.982 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.982 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@80 -- # sort 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@80 -- # xargs 00:23:43.982 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@76 -- # sort 00:23:43.982 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.982 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@76 -- # xargs 00:23:43.982 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.982 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.982 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:43.982 06:41:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:44.241 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:44.241 06:41:36 -- common/autotest_common.sh@640 -- # local es=0 00:23:44.241 06:41:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:44.241 06:41:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:44.241 06:41:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:44.241 06:41:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:44.241 06:41:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:44.241 06:41:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:44.241 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.241 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.241 [2024-10-04 06:41:36.720418] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:44.241 2024/10/04 06:41:36 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:44.241 request: 00:23:44.241 { 00:23:44.241 "method": "bdev_nvme_start_mdns_discovery", 00:23:44.241 "params": { 00:23:44.241 "name": "cdc", 00:23:44.241 "svcname": "_nvme-disc._tcp", 00:23:44.241 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:44.241 } 00:23:44.241 } 00:23:44.241 Got JSON-RPC error response 00:23:44.241 GoRPCClient: error on JSON-RPC call 00:23:44.241 06:41:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:44.241 06:41:36 -- common/autotest_common.sh@643 -- # es=1 00:23:44.241 06:41:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:44.241 06:41:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:44.241 06:41:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:44.241 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.241 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@76 -- # sort 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@76 -- # xargs 00:23:44.241 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.241 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:44.241 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:44.241 06:41:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:44.241 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.242 06:41:36 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:44.242 06:41:36 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:44.242 06:41:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:44.242 06:41:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.242 06:41:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:44.242 06:41:36 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:44.242 06:41:36 -- host/mdns_discovery.sh@197 -- # kill 97844 00:23:44.242 06:41:36 -- host/mdns_discovery.sh@200 -- # wait 97844 00:23:44.500 [2024-10-04 06:41:36.985346] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:44.500 06:41:37 -- host/mdns_discovery.sh@201 -- # kill 97931 00:23:44.500 Got SIGTERM, quitting. 00:23:44.500 06:41:37 -- host/mdns_discovery.sh@202 -- # kill 97880 00:23:44.500 06:41:37 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:44.500 06:41:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:44.500 06:41:37 -- nvmf/common.sh@116 -- # sync 00:23:44.500 Got SIGTERM, quitting. 
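For reference, the two negative tests above drive the same RPC twice and expect Code=-17 (File exists): once reusing the discovery name (mdns) and once reusing the service type (_nvme-disc._tcp). A hedged sketch of the calls as issued in this trace:

  # first call registers the mDNS discovery service named 'mdns'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # a second call reusing the name or the service type fails with -17, as logged above
  # teardown when done:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_stop_mdns_discovery -b mdns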
00:23:44.500 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:44.500 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:44.500 avahi-daemon 0.8 exiting. 00:23:44.500 06:41:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:44.500 06:41:37 -- nvmf/common.sh@119 -- # set +e 00:23:44.500 06:41:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:44.500 06:41:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:44.500 rmmod nvme_tcp 00:23:44.500 rmmod nvme_fabrics 00:23:44.500 rmmod nvme_keyring 00:23:44.500 06:41:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:44.500 06:41:37 -- nvmf/common.sh@123 -- # set -e 00:23:44.500 06:41:37 -- nvmf/common.sh@124 -- # return 0 00:23:44.500 06:41:37 -- nvmf/common.sh@477 -- # '[' -n 97812 ']' 00:23:44.500 06:41:37 -- nvmf/common.sh@478 -- # killprocess 97812 00:23:44.500 06:41:37 -- common/autotest_common.sh@926 -- # '[' -z 97812 ']' 00:23:44.500 06:41:37 -- common/autotest_common.sh@930 -- # kill -0 97812 00:23:44.501 06:41:37 -- common/autotest_common.sh@931 -- # uname 00:23:44.759 06:41:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:44.759 06:41:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97812 00:23:44.759 killing process with pid 97812 00:23:44.759 06:41:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:44.759 06:41:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:44.759 06:41:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97812' 00:23:44.759 06:41:37 -- common/autotest_common.sh@945 -- # kill 97812 00:23:44.759 06:41:37 -- common/autotest_common.sh@950 -- # wait 97812 00:23:45.018 06:41:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:45.018 06:41:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:45.018 06:41:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:45.018 06:41:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.018 06:41:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:45.018 06:41:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.018 06:41:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.018 06:41:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.018 06:41:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:45.018 00:23:45.018 real 0m20.090s 00:23:45.018 user 0m39.821s 00:23:45.018 sys 0m1.963s 00:23:45.018 06:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.018 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:23:45.018 ************************************ 00:23:45.018 END TEST nvmf_mdns_discovery 00:23:45.018 ************************************ 00:23:45.018 06:41:37 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:45.018 06:41:37 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:45.018 06:41:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:45.018 06:41:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:45.018 06:41:37 -- common/autotest_common.sh@10 -- # set +x 00:23:45.018 ************************************ 00:23:45.018 START TEST nvmf_multipath 00:23:45.018 ************************************ 00:23:45.018 06:41:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:45.018 * Looking for 
test storage... 00:23:45.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:45.018 06:41:37 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.018 06:41:37 -- nvmf/common.sh@7 -- # uname -s 00:23:45.018 06:41:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.018 06:41:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.018 06:41:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.018 06:41:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.018 06:41:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.018 06:41:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.018 06:41:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.018 06:41:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.018 06:41:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.018 06:41:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.018 06:41:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:23:45.018 06:41:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:23:45.018 06:41:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.018 06:41:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.018 06:41:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.018 06:41:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.018 06:41:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.018 06:41:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.018 06:41:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.018 06:41:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.018 06:41:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.019 06:41:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.019 06:41:37 -- 
paths/export.sh@5 -- # export PATH 00:23:45.019 06:41:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.019 06:41:37 -- nvmf/common.sh@46 -- # : 0 00:23:45.019 06:41:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:45.019 06:41:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:45.019 06:41:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:45.019 06:41:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.019 06:41:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.019 06:41:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:45.019 06:41:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:45.019 06:41:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:45.019 06:41:37 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:45.019 06:41:37 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:45.019 06:41:37 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:45.019 06:41:37 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:45.019 06:41:37 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.019 06:41:37 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:45.019 06:41:37 -- host/multipath.sh@30 -- # nvmftestinit 00:23:45.019 06:41:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:45.019 06:41:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.019 06:41:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:45.019 06:41:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:45.019 06:41:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:45.019 06:41:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.019 06:41:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.019 06:41:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.019 06:41:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:45.019 06:41:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:45.019 06:41:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:45.019 06:41:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:45.019 06:41:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:45.019 06:41:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:45.019 06:41:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.019 06:41:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.019 06:41:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:45.019 06:41:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:45.019 06:41:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:45.019 06:41:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:45.019 06:41:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:45.019 06:41:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.019 06:41:37 -- 
nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:45.019 06:41:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:45.019 06:41:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:45.019 06:41:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:45.019 06:41:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:45.019 06:41:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:45.278 Cannot find device "nvmf_tgt_br" 00:23:45.278 06:41:37 -- nvmf/common.sh@154 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:45.278 Cannot find device "nvmf_tgt_br2" 00:23:45.278 06:41:37 -- nvmf/common.sh@155 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:45.278 06:41:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:45.278 Cannot find device "nvmf_tgt_br" 00:23:45.278 06:41:37 -- nvmf/common.sh@157 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:45.278 Cannot find device "nvmf_tgt_br2" 00:23:45.278 06:41:37 -- nvmf/common.sh@158 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:45.278 06:41:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:45.278 06:41:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:45.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.278 06:41:37 -- nvmf/common.sh@161 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:45.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:45.278 06:41:37 -- nvmf/common.sh@162 -- # true 00:23:45.278 06:41:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:45.278 06:41:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:45.278 06:41:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:45.278 06:41:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:45.278 06:41:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:45.278 06:41:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:45.278 06:41:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:45.278 06:41:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:45.278 06:41:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:45.278 06:41:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:45.278 06:41:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:45.278 06:41:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:45.278 06:41:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:45.278 06:41:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:45.278 06:41:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:45.278 06:41:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:45.278 06:41:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:45.278 06:41:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 
00:23:45.278 06:41:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:45.278 06:41:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:45.278 06:41:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:45.278 06:41:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:45.537 06:41:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:45.537 06:41:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:45.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:45.537 00:23:45.537 --- 10.0.0.2 ping statistics --- 00:23:45.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.537 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:45.537 06:41:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:45.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:45.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:45.537 00:23:45.537 --- 10.0.0.3 ping statistics --- 00:23:45.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.537 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:45.537 06:41:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:45.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:45.537 00:23:45.537 --- 10.0.0.1 ping statistics --- 00:23:45.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.537 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:45.537 06:41:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.537 06:41:37 -- nvmf/common.sh@421 -- # return 0 00:23:45.537 06:41:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:45.537 06:41:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.537 06:41:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:45.537 06:41:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:45.537 06:41:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.537 06:41:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:45.537 06:41:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:45.537 06:41:38 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:45.537 06:41:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:45.537 06:41:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:45.537 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:23:45.537 06:41:38 -- nvmf/common.sh@469 -- # nvmfpid=98441 00:23:45.537 06:41:38 -- nvmf/common.sh@470 -- # waitforlisten 98441 00:23:45.537 06:41:38 -- common/autotest_common.sh@819 -- # '[' -z 98441 ']' 00:23:45.537 06:41:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.537 06:41:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:45.537 06:41:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
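Before the target starts, nvmf_veth_init (nvmf/common.sh, traced above) builds the bridged veth topology that the pings just verified; a condensed sketch of those steps, with the interface names and 10.0.0.x addresses used in this run (link-up steps and iptables rules omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # host -> target, as checked above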
00:23:45.537 06:41:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:45.537 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:23:45.537 06:41:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:45.537 [2024-10-04 06:41:38.060501] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:23:45.537 [2024-10-04 06:41:38.060762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.537 [2024-10-04 06:41:38.197734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:45.796 [2024-10-04 06:41:38.272623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:45.796 [2024-10-04 06:41:38.272794] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.796 [2024-10-04 06:41:38.272806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.796 [2024-10-04 06:41:38.272825] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.796 [2024-10-04 06:41:38.272987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.796 [2024-10-04 06:41:38.273000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.732 06:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:46.732 06:41:39 -- common/autotest_common.sh@852 -- # return 0 00:23:46.732 06:41:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:46.732 06:41:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:46.732 06:41:39 -- common/autotest_common.sh@10 -- # set +x 00:23:46.732 06:41:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.732 06:41:39 -- host/multipath.sh@33 -- # nvmfapp_pid=98441 00:23:46.732 06:41:39 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:46.732 [2024-10-04 06:41:39.399985] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.990 06:41:39 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:47.249 Malloc0 00:23:47.249 06:41:39 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:47.508 06:41:39 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.766 06:41:40 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.766 [2024-10-04 06:41:40.432326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.025 06:41:40 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:48.025 [2024-10-04 06:41:40.648310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:48.025 06:41:40 -- host/multipath.sh@44 -- # bdevperf_pid=98545 00:23:48.025 06:41:40 -- 
host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:48.025 06:41:40 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.025 06:41:40 -- host/multipath.sh@47 -- # waitforlisten 98545 /var/tmp/bdevperf.sock 00:23:48.025 06:41:40 -- common/autotest_common.sh@819 -- # '[' -z 98545 ']' 00:23:48.025 06:41:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.025 06:41:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:48.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.025 06:41:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.025 06:41:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:48.025 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:23:49.400 06:41:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:49.400 06:41:41 -- common/autotest_common.sh@852 -- # return 0 00:23:49.400 06:41:41 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:49.400 06:41:41 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:49.659 Nvme0n1 00:23:49.659 06:41:42 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:50.223 Nvme0n1 00:23:50.223 06:41:42 -- host/multipath.sh@78 -- # sleep 1 00:23:50.223 06:41:42 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:51.162 06:41:43 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:51.162 06:41:43 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.421 06:41:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:51.679 06:41:44 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:51.679 06:41:44 -- host/multipath.sh@65 -- # dtrace_pid=98637 00:23:51.679 06:41:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:51.679 06:41:44 -- host/multipath.sh@66 -- # sleep 6 00:23:58.240 06:41:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:58.240 06:41:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:58.240 06:41:50 -- host/multipath.sh@67 -- # active_port=4421 00:23:58.240 06:41:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:58.240 Attaching 4 probes... 
00:23:58.240 @path[10.0.0.2, 4421]: 21985 00:23:58.240 @path[10.0.0.2, 4421]: 22968 00:23:58.240 @path[10.0.0.2, 4421]: 22991 00:23:58.240 @path[10.0.0.2, 4421]: 22997 00:23:58.240 @path[10.0.0.2, 4421]: 23045 00:23:58.240 06:41:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:58.240 06:41:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:58.240 06:41:50 -- host/multipath.sh@69 -- # sed -n 1p 00:23:58.240 06:41:50 -- host/multipath.sh@69 -- # port=4421 00:23:58.240 06:41:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:58.240 06:41:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:58.240 06:41:50 -- host/multipath.sh@72 -- # kill 98637 00:23:58.240 06:41:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:58.240 06:41:50 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:58.240 06:41:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.240 06:41:50 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:58.500 06:41:50 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:58.500 06:41:50 -- host/multipath.sh@65 -- # dtrace_pid=98769 00:23:58.500 06:41:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:58.500 06:41:50 -- host/multipath.sh@66 -- # sleep 6 00:24:05.062 06:41:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:05.062 06:41:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:05.062 06:41:57 -- host/multipath.sh@67 -- # active_port=4420 00:24:05.062 06:41:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:05.062 Attaching 4 probes... 
00:24:05.062 @path[10.0.0.2, 4420]: 22892 00:24:05.062 @path[10.0.0.2, 4420]: 23270 00:24:05.062 @path[10.0.0.2, 4420]: 23221 00:24:05.062 @path[10.0.0.2, 4420]: 23267 00:24:05.062 @path[10.0.0.2, 4420]: 23054 00:24:05.062 06:41:57 -- host/multipath.sh@69 -- # sed -n 1p 00:24:05.062 06:41:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:05.062 06:41:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:05.062 06:41:57 -- host/multipath.sh@69 -- # port=4420 00:24:05.062 06:41:57 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:05.062 06:41:57 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:05.062 06:41:57 -- host/multipath.sh@72 -- # kill 98769 00:24:05.062 06:41:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:05.062 06:41:57 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:05.062 06:41:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:05.062 06:41:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.320 06:41:57 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:05.320 06:41:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:05.320 06:41:57 -- host/multipath.sh@65 -- # dtrace_pid=98900 00:24:05.320 06:41:57 -- host/multipath.sh@66 -- # sleep 6 00:24:11.881 06:42:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:11.881 06:42:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:11.881 06:42:04 -- host/multipath.sh@67 -- # active_port=4421 00:24:11.881 06:42:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:11.881 Attaching 4 probes... 
00:24:11.881 @path[10.0.0.2, 4421]: 16289 00:24:11.881 @path[10.0.0.2, 4421]: 22263 00:24:11.881 @path[10.0.0.2, 4421]: 22414 00:24:11.881 @path[10.0.0.2, 4421]: 21977 00:24:11.881 @path[10.0.0.2, 4421]: 21476 00:24:11.881 06:42:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:11.881 06:42:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:11.881 06:42:04 -- host/multipath.sh@69 -- # sed -n 1p 00:24:11.881 06:42:04 -- host/multipath.sh@69 -- # port=4421 00:24:11.882 06:42:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:11.882 06:42:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:11.882 06:42:04 -- host/multipath.sh@72 -- # kill 98900 00:24:11.882 06:42:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:11.882 06:42:04 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:11.882 06:42:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:11.882 06:42:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:12.159 06:42:04 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:12.159 06:42:04 -- host/multipath.sh@65 -- # dtrace_pid=99036 00:24:12.159 06:42:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:12.159 06:42:04 -- host/multipath.sh@66 -- # sleep 6 00:24:18.738 06:42:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:18.738 06:42:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:18.738 06:42:10 -- host/multipath.sh@67 -- # active_port= 00:24:18.738 06:42:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.738 Attaching 4 probes... 
00:24:18.738 00:24:18.738 00:24:18.738 00:24:18.738 00:24:18.738 00:24:18.738 06:42:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:18.738 06:42:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:18.738 06:42:10 -- host/multipath.sh@69 -- # sed -n 1p 00:24:18.738 06:42:10 -- host/multipath.sh@69 -- # port= 00:24:18.738 06:42:10 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:18.738 06:42:10 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:18.738 06:42:10 -- host/multipath.sh@72 -- # kill 99036 00:24:18.738 06:42:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.738 06:42:10 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:18.738 06:42:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.738 06:42:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.738 06:42:11 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:18.738 06:42:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:18.738 06:42:11 -- host/multipath.sh@65 -- # dtrace_pid=99165 00:24:18.738 06:42:11 -- host/multipath.sh@66 -- # sleep 6 00:24:25.302 06:42:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:25.302 06:42:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:25.302 06:42:17 -- host/multipath.sh@67 -- # active_port=4421 00:24:25.302 06:42:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:25.302 Attaching 4 probes... 
00:24:25.302 @path[10.0.0.2, 4421]: 21955
00:24:25.302 @path[10.0.0.2, 4421]: 22231
00:24:25.302 @path[10.0.0.2, 4421]: 22302
00:24:25.302 @path[10.0.0.2, 4421]: 22367
00:24:25.302 @path[10.0.0.2, 4421]: 22301
00:24:25.302 06:42:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:25.302 06:42:17 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:25.302 06:42:17 -- host/multipath.sh@69 -- # sed -n 1p
00:24:25.302 06:42:17 -- host/multipath.sh@69 -- # port=4421
00:24:25.302 06:42:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:25.302 06:42:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:25.302 06:42:17 -- host/multipath.sh@72 -- # kill 99165
00:24:25.302 06:42:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:25.302 06:42:17 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-10-04 06:42:17.974184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4e70 is same with the state(5) to be set
00:24:25.303 [2024-10-04 06:42:17.974952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4e70 is same with the state(5) to be set
00:24:25.562 06:42:17 -- host/multipath.sh@101 -- # sleep 1
00:24:26.498 06:42:18 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:24:26.498 06:42:18 -- host/multipath.sh@65 -- # dtrace_pid=99302
00:24:26.498 06:42:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:26.498 06:42:18 -- host/multipath.sh@66 -- # sleep 6
00:24:33.072 06:42:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:33.072 06:42:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:24:33.072 06:42:25 -- host/multipath.sh@67 -- # active_port=4420
00:24:33.072 06:42:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:33.072 Attaching 4 probes...
00:24:33.072 @path[10.0.0.2, 4420]: 21962
00:24:33.072 @path[10.0.0.2, 4420]: 22478
00:24:33.072 @path[10.0.0.2, 4420]: 22500
00:24:33.072 @path[10.0.0.2, 4420]: 22485
00:24:33.072 @path[10.0.0.2, 4420]: 22506
00:24:33.072 06:42:25 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:33.072 06:42:25 -- host/multipath.sh@69 -- # sed -n 1p
00:24:33.072 06:42:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:33.072 06:42:25 -- host/multipath.sh@69 -- # port=4420
00:24:33.072 06:42:25 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:24:33.072 06:42:25 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:24:33.072 06:42:25 -- host/multipath.sh@72 -- # kill 99302
00:24:33.072 06:42:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:33.072 06:42:25 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-10-04 06:42:25.611364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:33.072 06:42:25 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:33.331 06:42:25 -- host/multipath.sh@111 -- # sleep 6
00:24:39.892 06:42:31 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:24:39.892 06:42:31 -- host/multipath.sh@65 -- # dtrace_pid=99493
00:24:39.892 06:42:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98441 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:39.892 06:42:31 -- host/multipath.sh@66 -- # sleep 6
00:24:46.476 06:42:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:46.476 06:42:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:24:46.476 06:42:38 -- host/multipath.sh@67 -- # active_port=4421
00:24:46.476 06:42:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:46.476 Attaching 4 probes...
00:24:46.476 @path[10.0.0.2, 4421]: 21379
00:24:46.476 @path[10.0.0.2, 4421]: 21679
00:24:46.476 @path[10.0.0.2, 4421]: 21818
00:24:46.476 @path[10.0.0.2, 4421]: 21869
00:24:46.476 @path[10.0.0.2, 4421]: 21852
00:24:46.476 06:42:38 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:46.476 06:42:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:46.476 06:42:38 -- host/multipath.sh@69 -- # sed -n 1p
00:24:46.476 06:42:38 -- host/multipath.sh@69 -- # port=4421
00:24:46.476 06:42:38 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:46.476 06:42:38 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:46.476 06:42:38 -- host/multipath.sh@72 -- # kill 99493
00:24:46.476 06:42:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:46.476 06:42:38 -- host/multipath.sh@114 -- # killprocess 98545
00:24:46.476 06:42:38 -- common/autotest_common.sh@926 -- # '[' -z 98545 ']'
00:24:46.476 06:42:38 -- common/autotest_common.sh@930 -- # kill -0 98545
00:24:46.476 06:42:38 -- common/autotest_common.sh@931 -- # uname
00:24:46.476 06:42:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:46.476 06:42:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98545
killing process with pid 98545
06:42:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:46.477 06:42:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:46.477 06:42:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98545'
00:24:46.477 06:42:38 -- common/autotest_common.sh@945 -- # kill 98545
00:24:46.477 06:42:38 -- common/autotest_common.sh@950 -- # wait 98545
00:24:46.477 Connection closed with partial response:
00:24:46.477
00:24:46.477
00:24:46.477 06:42:38 -- host/multipath.sh@116 -- # wait 98545
00:24:46.477 06:42:38 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:46.477 [2024-10-04 06:41:40.712591] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:24:46.477 [2024-10-04 06:41:40.712675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98545 ]
00:24:46.477 [2024-10-04 06:41:40.841651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:46.477 [2024-10-04 06:41:40.914492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:46.477 Running I/O for 90 seconds...
00:24:46.477 [2024-10-04 06:41:50.977982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.978900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.978969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.978987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.979000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.979028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.979044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.979389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.979411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.979432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.477 [2024-10-04 06:41:50.979446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.979476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.477 [2024-10-04 06:41:50.979492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.477 [2024-10-04 06:41:50.979510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.979622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.478 [2024-10-04 06:41:50.979687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.979720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.979839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.979876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.979919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.979970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.979985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.980507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.980890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.980924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.980958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.980977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.980991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.981023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.981056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.981091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.981123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.981161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:24:46.478 [2024-10-04 06:41:50.981180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.981201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.981235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.478 [2024-10-04 06:41:50.981268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.478 [2024-10-04 06:41:50.981301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.478 [2024-10-04 06:41:50.981320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.981446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.981545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.981868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.981965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.981984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.981999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.982031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.982072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.982106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.479 [2024-10-04 06:41:50.982139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.479 [2024-10-04 06:41:50.982172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.479 [2024-10-04 06:41:50.982190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:46.479 [2024-10-04 06:41:50.982210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:46.479 [2024-10-04 06:41:50.982228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.479 [2024-10-04 06:41:50.982242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:46.479 [2024-10-04 06:41:50.982261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:46.479 [2024-10-04 06:41:50.982275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE output elided: alternating 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion record pairs for queued READ/WRITE I/O on qid:1 nsid:1 (len:8; lba ~92152-92912 in the burst at 06:41:50.982, lba ~112608-113872 in the burst at 06:41:57.503-57.511), with every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:46.485 [2024-10-04 06:41:57.511722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.485 [2024-10-04 06:41:57.511736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC
ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.511767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.511850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.511883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.511915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.511947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.511967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.511988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 
06:41:57.512086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113008 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.512421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.512453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.512472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.512485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.522467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.522537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.522604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.485 [2024-10-04 06:41:57.522637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.485 [2024-10-04 06:41:57.522716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.485 [2024-10-04 06:41:57.522734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.522748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.522780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.522812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.522872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.522904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.522936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.522955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.522968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
00:24:46.486 [2024-10-04 06:41:57.522986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.522999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.486 [2024-10-04 06:41:57.523733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.486 [2024-10-04 06:41:57.523751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.486 [2024-10-04 06:41:57.523764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.523795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.523846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.523879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.523911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.523942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.523980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.523999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113056 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.524941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.524973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.524991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525055] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 
06:41:57.525432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.487 [2024-10-04 06:41:57.525702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.487 [2024-10-04 06:41:57.525878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.487 [2024-10-04 06:41:57.525897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.525911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.525929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.525943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.525961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.525976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.526015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.526401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:46.488 [2024-10-04 06:41:57.526472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.526543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.526561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.526575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:99 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.488 [2024-10-04 06:41:57.527429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 06:41:57.527592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.488 [2024-10-04 06:41:57.527606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.488 [2024-10-04 
00:24:46.488 [2024-10-04 06:41:57.527624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.488 [2024-10-04 06:41:57.527638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:46.489 [2024-10-04 06:41:57.528027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:46.489 [2024-10-04 06:41:57.528041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every outstanding I/O on qid:1 (READ and WRITE, nsid:1, lba 112608-113872, len:8); each command is failed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, timestamps 06:41:57.527624 through 06:41:57.542504; the burst continues below ...]
06:41:57.542523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.542805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.542975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.542989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.543008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.543044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.543076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.543091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.543110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.543124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.543151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.494 [2024-10-04 06:41:57.543166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.494 [2024-10-04 06:41:57.543185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-10-04 06:41:57.543199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.543631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.543651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.543665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.495 [2024-10-04 06:41:57.544573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 
06:41:57.544725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.495 [2024-10-04 06:41:57.544836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.495 [2024-10-04 06:41:57.544854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.544873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.544887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.544913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.544928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.544948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.544962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.544980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:46.496 [2024-10-04 06:41:57.545734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.545954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.545972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.545987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.546019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.546051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.546084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.496 [2024-10-04 06:41:57.546116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.546157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.496 [2024-10-04 06:41:57.546177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.496 [2024-10-04 06:41:57.546191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.546624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 
cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.546743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.546756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547767] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.547962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.547981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.547995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.548027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.548045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 [2024-10-04 06:41:57.548060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.548078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.497 [2024-10-04 06:41:57.548092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.497 [2024-10-04 06:41:57.548120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.497 
[2024-10-04 06:41:57.548136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.497 [... 00:24:46.497-00:24:46.503, 2024-10-04 06:41:57.548-06:41:57.556: repeated nvme_qpair.c *NOTICE* pairs: 243:nvme_io_qpair_print_command (READ/WRITE sqid:1 nsid:1 len:8, lba 112608-113864) each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd wrapping 0000-007f, p:0 m:0 dnr:0, for every outstanding cid ...] [2024-10-04 06:41:57.556911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:24:46.503 [2024-10-04 06:41:57.556925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.556944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.503 [2024-10-04 06:41:57.556964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.556985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.503 [2024-10-04 06:41:57.557000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.503 [2024-10-04 06:41:57.557032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.503 [2024-10-04 06:41:57.557065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.503 [2024-10-04 06:41:57.557098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.503 [2024-10-04 06:41:57.557133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.503 [2024-10-04 06:41:57.557166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.503 [2024-10-04 06:41:57.557185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:51 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.557887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.557941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.557955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.504 [2024-10-04 06:41:57.558925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.504 [2024-10-04 06:41:57.558955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.504 [2024-10-04 06:41:57.558970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.558989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 
[2024-10-04 06:41:57.559350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.559906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.559972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.559990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.560004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.505 [2024-10-04 06:41:57.560037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.505 [2024-10-04 06:41:57.560241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.505 [2024-10-04 06:41:57.560261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 
m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.560406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.560925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.560979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.560993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.561410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 
06:41:57.561509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.561974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.506 [2024-10-04 06:41:57.561989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.562008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.562021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.562040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.506 [2024-10-04 06:41:57.562054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.506 [2024-10-04 06:41:57.562072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.507 [2024-10-04 06:41:57.562494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.507 [2024-10-04 06:41:57.562512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.507 [2024-10-04 06:41:57.562526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:24:46.507 [2024-10-04 06:41:57.562544] nvme_qpair.c: *NOTICE*: [condensed: a long run of near-identical 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs omitted] Between 06:41:57.562544 and 06:41:57.577489, every READ and WRITE outstanding on sqid:1 (nsid:1, cids 0-126, lba 112608-113872, len:8) was completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0; the controller reports this path's ANA state as Inaccessible, so the target fails each queued I/O with the same path-related status. 00:24:46.512 [2024-10-04 06:41:57.577489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0
dnr:0 00:24:46.512 [2024-10-04 06:41:57.577608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.512 [2024-10-04 06:41:57.577628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.512 [2024-10-04 06:42:04.626197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.512 [2024-10-04 06:42:04.626281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.512 [2024-10-04 06:42:04.626317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.512 [2024-10-04 06:42:04.626349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.512 [2024-10-04 06:42:04.626381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.512 [2024-10-04 06:42:04.626413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:46.512 [2024-10-04 06:42:04.626431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.626445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.626540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.626665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.626979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.626998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.627013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.627091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.627698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.513 [2024-10-04 06:42:04.627736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.627918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.627979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.628030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.628066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.628101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.513 [2024-10-04 06:42:04.628137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.628175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:46.513 [2024-10-04 06:42:04.628196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.513 [2024-10-04 06:42:04.628215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.628650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:24:46.514 [2024-10-04 06:42:04.628875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.628981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.628995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.514 [2024-10-04 06:42:04.629731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:46.514 [2024-10-04 06:42:04.629793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.514 [2024-10-04 06:42:04.629807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.629858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.629874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.629906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.629922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.629947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.629961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.629985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.629999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.515 [2024-10-04 06:42:04.630151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.630882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.630963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.630987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.631002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.631035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:04.631053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:04.631077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:46.515 [2024-10-04 06:42:04.631091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.515 [2024-10-04 06:42:17.975638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.515 [2024-10-04 06:42:17.975669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.516 [2024-10-04 06:42:17.975895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.516 [2024-10-04 06:42:17.975908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.516 [2024-10-04 06:42:17.975922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
06:42:38 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
[... several hundred near-identical entries condensed: once the subsystem was deleted, every outstanding READ/WRITE on qid:1 (nsid:1, lba 7368-8496, len:8) was printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) ...]
00:24:46.518 [2024-10-04 06:42:17.978885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e44120 is same with the state(5) to be set
00:24:46.519 [2024-10-04 06:42:17.978901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:46.519 [2024-10-04 06:42:17.978910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:46.519 [2024-10-04 06:42:17.978921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8024 len:8 PRP1 0x0 PRP2 0x0
00:24:46.519 [2024-10-04 06:42:17.978932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:46.519 [2024-10-04 06:42:17.978998] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e44120 was disconnected and freed. reset controller.
00:24:46.519 [2024-10-04 06:42:17.980156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:46.519 [2024-10-04 06:42:17.980235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54b60 (9): Bad file descriptor
00:24:46.519 [2024-10-04 06:42:17.980405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.519 [2024-10-04 06:42:17.980460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.519 [2024-10-04 06:42:17.980480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54b60 with addr=10.0.0.2, port=4421
00:24:46.519 [2024-10-04 06:42:17.980500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54b60 is same with the state(5) to be set
00:24:46.519 [2024-10-04 06:42:17.980526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54b60 (9): Bad file descriptor
00:24:46.519 [2024-10-04 06:42:17.980554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:46.519 [2024-10-04 06:42:17.980568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:46.519 [2024-10-04 06:42:17.980593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:46.519 [2024-10-04 06:42:17.980618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:46.519 [2024-10-04 06:42:17.980631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:46.519 [2024-10-04 06:42:28.029059] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
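The ten-second gap between the failed reset at 06:42:17 and the successful one at 06:42:28 is bdev_nvme retrying the TCP connection until the path comes back. A minimal sketch of the host-side knobs that govern this behaviour, reusing the exact rpc.py flags that appear in the nvmf_timeout setup traced further below (the comments are interpretation, not part of the trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # retry-count -1, exactly as issued at host/timeout.sh@45 below
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # retry a lost path every 2 s and give up on the controller after 5 s,
    # per the flags issued at host/timeout.sh@46 below
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2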
00:24:46.519 Received shutdown signal, test time was about 55.518272 seconds
00:24:46.519
00:24:46.519 Latency(us)
00:24:46.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:46.519 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:46.519 Verification LBA range: start 0x0 length 0x4000
00:24:46.519 Nvme0n1 : 55.52 12768.77 49.88 0.00 0.00 10009.12 770.79 7015926.69
00:24:46.519 ===================================================================================================================
00:24:46.519 Total : 12768.77 49.88 0.00 0.00 10009.12 770.79 7015926.69
00:24:46.519 06:42:38 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:46.519 06:42:38 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:46.519 06:42:38 -- host/multipath.sh@125 -- # nvmftestfini
00:24:46.519 06:42:38 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:46.519 06:42:38 -- nvmf/common.sh@116 -- # sync
00:24:46.519 06:42:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:46.519 06:42:38 -- nvmf/common.sh@119 -- # set +e
00:24:46.519 06:42:38 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:46.519 06:42:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:46.519 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:46.519 06:42:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:46.519 06:42:38 -- nvmf/common.sh@123 -- # set -e
00:24:46.519 06:42:38 -- nvmf/common.sh@124 -- # return 0
00:24:46.519 06:42:38 -- nvmf/common.sh@477 -- # '[' -n 98441 ']'
00:24:46.519 06:42:38 -- nvmf/common.sh@478 -- # killprocess 98441
00:24:46.519 06:42:38 -- common/autotest_common.sh@926 -- # '[' -z 98441 ']'
00:24:46.519 06:42:38 -- common/autotest_common.sh@930 -- # kill -0 98441
00:24:46.519 06:42:38 -- common/autotest_common.sh@931 -- # uname
00:24:46.519 06:42:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:46.519 06:42:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98441
00:24:46.519 killing process with pid 98441
00:24:46.519 06:42:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:46.519 06:42:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:46.519 06:42:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98441'
00:24:46.519 06:42:38 -- common/autotest_common.sh@945 -- # kill 98441
00:24:46.519 06:42:38 -- common/autotest_common.sh@950 -- # wait 98441
00:24:46.779 06:42:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:46.779 06:42:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:46.779 06:42:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:46.779 06:42:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:46.779 06:42:39 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:46.779 06:42:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:46.779 06:42:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:46.779 06:42:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:46.779 06:42:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:46.779
00:24:46.779 real 1m1.734s
00:24:46.779 user 2m54.193s
00:24:46.779 sys 0m14.167s
00:24:46.779 06:42:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:46.779 06:42:39 -- common/autotest_common.sh@10 -- # set +x
00:24:46.779 ************************************
00:24:46.779 END TEST nvmf_multipath
00:24:46.779 ************************************
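A quick consistency check on the summary above: at the 4096-byte IO size from the job line, 12768.77 IOPS x 4096 bytes comes to about 52,300,882 bytes/s, i.e. 49.88 MiB/s, which matches the MiB/s column, and 55.52 s x 12768.77 IOPS is roughly 709,000 IOs verified over the run.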
06:42:39 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
06:42:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
06:42:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
06:42:39 -- common/autotest_common.sh@10 -- # set +x
************************ START TEST nvmf_timeout ************************
06:42:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
06:42:39 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
06:42:39 -- nvmf/common.sh@7 -- # uname -s
06:42:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
06:42:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
06:42:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
06:42:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
06:42:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
06:42:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
06:42:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
06:42:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
06:42:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
06:42:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
06:42:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c
06:42:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c
06:42:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
06:42:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
06:42:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt
06:42:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
06:42:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
06:42:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
06:42:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
06:42:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain prefixes repeated, followed by the system PATH; full value condensed ...]
06:42:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... as above ...]
06:42:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... as above ...]
06:42:39 -- paths/export.sh@5 -- # export PATH
06:42:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... as above ...]
06:42:39 -- nvmf/common.sh@46 -- # : 0
06:42:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
06:42:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args
06:42:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
06:42:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
06:42:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
06:42:39 -- nvmf/common.sh@32 -- # '[' -n '' ']'
06:42:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
06:42:39 -- nvmf/common.sh@50 -- # have_pci_nics=0
06:42:39 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
06:42:39 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
06:42:39 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
06:42:39 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
06:42:39 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
06:42:39 -- host/timeout.sh@19 -- # nvmftestinit
06:42:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
06:42:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
06:42:39 -- nvmf/common.sh@436 -- # prepare_net_devs
06:42:39 -- nvmf/common.sh@398 -- # local -g is_hw=no
06:42:39 -- nvmf/common.sh@400 -- # remove_spdk_ns
06:42:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:42:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
06:42:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
06:42:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:24:46.780 06:42:39 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:24:46.780 06:42:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:24:46.780 06:42:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:24:46.780 06:42:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:24:46.780 06:42:39 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:24:46.780 06:42:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:46.780 06:42:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:46.780 06:42:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:24:46.780 06:42:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:24:46.780 06:42:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:46.780 06:42:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:46.780 06:42:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:46.780 06:42:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:46.780 06:42:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:46.780 06:42:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:46.780 06:42:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:46.780 06:42:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:46.780 06:42:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:24:46.780 06:42:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:24:47.039 Cannot find device "nvmf_tgt_br"
06:42:39 -- nvmf/common.sh@154 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:24:47.039 Cannot find device "nvmf_tgt_br2"
06:42:39 -- nvmf/common.sh@155 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:24:47.039 06:42:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:24:47.039 Cannot find device "nvmf_tgt_br"
06:42:39 -- nvmf/common.sh@157 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:24:47.039 Cannot find device "nvmf_tgt_br2"
06:42:39 -- nvmf/common.sh@158 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:24:47.039 06:42:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:24:47.039 06:42:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:47.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
06:42:39 -- nvmf/common.sh@161 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:47.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
06:42:39 -- nvmf/common.sh@162 -- # true
00:24:47.039 06:42:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:24:47.039 06:42:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:24:47.039 06:42:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:24:47.039 06:42:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:24:47.039 06:42:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:24:47.039 06:42:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:24:47.039 06:42:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:24:47.039 06:42:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:24:47.039 06:42:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:24:47.039 06:42:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:24:47.039 06:42:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:24:47.039 06:42:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:24:47.039 06:42:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:24:47.039 06:42:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:24:47.039 06:42:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:24:47.039 06:42:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:24:47.039 06:42:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:24:47.039 06:42:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:24:47.039 06:42:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:24:47.298 06:42:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:47.298 06:42:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:47.298 06:42:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:47.298 06:42:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:47.298 06:42:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:24:47.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:47.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms
00:24:47.298
00:24:47.298 --- 10.0.0.2 ping statistics ---
00:24:47.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:47.298 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:24:47.298 06:42:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:24:47.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:47.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms
00:24:47.298
00:24:47.298 --- 10.0.0.3 ping statistics ---
00:24:47.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:47.298 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:24:47.298 06:42:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:47.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:47.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms
00:24:47.298
00:24:47.298 --- 10.0.0.1 ping statistics ---
00:24:47.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:47.298 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:24:47.298 06:42:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:47.298 06:42:39 -- nvmf/common.sh@421 -- # return 0
00:24:47.298 06:42:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:47.298 06:42:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:47.298 06:42:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:47.298 06:42:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:47.298 06:42:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:47.298 06:42:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:47.298 06:42:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:47.298 06:42:39 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:24:47.298 06:42:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:47.298 06:42:39 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:47.298 06:42:39 -- common/autotest_common.sh@10 -- # set +x
00:24:47.298 06:42:39 -- nvmf/common.sh@469 -- # nvmfpid=99815
00:24:47.298 06:42:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:47.298 06:42:39 -- nvmf/common.sh@470 -- # waitforlisten 99815
00:24:47.298 06:42:39 -- common/autotest_common.sh@819 -- # '[' -z 99815 ']'
00:24:47.298 06:42:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:47.298 06:42:39 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:47.298 06:42:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:47.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:47.298 06:42:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:47.298 06:42:39 -- common/autotest_common.sh@10 -- # set +x
00:24:47.298 [2024-10-04 06:42:39.869480] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:24:47.298 [2024-10-04 06:42:39.869561] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:47.557 [2024-10-04 06:42:40.000082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:47.557 [2024-10-04 06:42:40.108893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:47.557 [2024-10-04 06:42:40.109047] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:47.557 [2024-10-04 06:42:40.109059] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:47.557 [2024-10-04 06:42:40.109067] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
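Condensed, the nvmf_veth_init sequence above builds the following topology before the target starts; the commands are copied from the trace (link-up, cleanup and second-target steps omitted), and the comments are interpretation rather than part of the log:

    ip netns add nvmf_tgt_ns_spdk                                  # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # reachability check, as in the trace

This is why nvmf_tgt is launched above with ip netns exec nvmf_tgt_ns_spdk, and why it listens on 10.0.0.2 while the initiator connects from 10.0.0.1.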
00:24:47.557 [2024-10-04 06:42:40.109613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:47.557 [2024-10-04 06:42:40.109621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:48.492 06:42:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:48.492 06:42:40 -- common/autotest_common.sh@852 -- # return 0
00:24:48.492 06:42:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:24:48.492 06:42:40 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:48.492 06:42:40 -- common/autotest_common.sh@10 -- # set +x
00:24:48.492 06:42:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:48.492 06:42:40 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:48.492 06:42:40 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:48.749 [2024-10-04 06:42:41.227488] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:48.749 06:42:41 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:49.007 Malloc0
00:24:49.007 06:42:41 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:49.266 06:42:41 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:49.525 06:42:42 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:49.783 [2024-10-04 06:42:42.252491] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:49.783 06:42:42 -- host/timeout.sh@32 -- # bdevperf_pid=99906
00:24:49.783 06:42:42 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:49.783 06:42:42 -- host/timeout.sh@34 -- # waitforlisten 99906 /var/tmp/bdevperf.sock
00:24:49.783 06:42:42 -- common/autotest_common.sh@819 -- # '[' -z 99906 ']'
00:24:49.783 06:42:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:49.783 06:42:42 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:49.783 06:42:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:49.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:49.783 06:42:42 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:49.783 06:42:42 -- common/autotest_common.sh@10 -- # set +x
00:24:49.783 [2024-10-04 06:42:42.324641] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
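Stripped of the xtrace noise, the target provisioning above is five rpc.py calls; restated here for readability, with comments that are interpretation rather than part of the log (the 64/512 values come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                # TCP transport, -o/-u flags as issued
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                   # RAM-backed bdev: 64 MB, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice above is the direct result of the last call, and it is this listener that the bdevperf initiator attaches to.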
00:24:49.783 [2024-10-04 06:42:42.324756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99906 ] 00:24:49.783 [2024-10-04 06:42:42.461693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.042 [2024-10-04 06:42:42.546926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.636 06:42:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:50.636 06:42:43 -- common/autotest_common.sh@852 -- # return 0 00:24:50.636 06:42:43 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:50.895 06:42:43 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:51.153 NVMe0n1 00:24:51.411 06:42:43 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.411 06:42:43 -- host/timeout.sh@51 -- # rpc_pid=99954 00:24:51.411 06:42:43 -- host/timeout.sh@53 -- # sleep 1 00:24:51.411 Running I/O for 10 seconds... 00:24:52.352 06:42:44 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.613 [2024-10-04 06:42:45.109015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.613 [2024-10-04 06:42:45.109177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25faa60 is same with the state(5) to be set 00:24:52.614 
(the tcp.c:1574 nvmf_tcp_qpair_set_recv_state error above repeats about 45 more times for tqpair=0x25faa60 while the listener is torn down; duplicate lines omitted) 00:24:52.614 [2024-10-04 06:42:45.109928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.614 [2024-10-04 06:42:45.109959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (this command/completion pair then repeats for every other outstanding I/O on the queue, roughly 90 further READ and WRITE commands spanning lba 12520 through 13752, each aborted with SQ DELETION (00/08) on qid:1; duplicate lines omitted) 00:24:52.617 [2024-10-04 06:42:45.112276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13240
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.617 [2024-10-04 06:42:45.112284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.617 [2024-10-04 06:42:45.112299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.617 [2024-10-04 06:42:45.112307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.617 [2024-10-04 06:42:45.112316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce1b80 is same with the state(5) to be set 00:24:52.617 [2024-10-04 06:42:45.112327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:52.617 [2024-10-04 06:42:45.112333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.617 [2024-10-04 06:42:45.112345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13256 len:8 PRP1 0x0 PRP2 0x0 00:24:52.617 [2024-10-04 06:42:45.112354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.617 [2024-10-04 06:42:45.112413] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xce1b80 was disconnected and freed. reset controller. 00:24:52.617 [2024-10-04 06:42:45.112620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.617 [2024-10-04 06:42:45.112687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0250 (9): Bad file descriptor 00:24:52.617 [2024-10-04 06:42:45.112804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.617 [2024-10-04 06:42:45.112876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.617 [2024-10-04 06:42:45.112892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb0250 with addr=10.0.0.2, port=4420 00:24:52.617 [2024-10-04 06:42:45.112902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb0250 is same with the state(5) to be set 00:24:52.617 [2024-10-04 06:42:45.112919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0250 (9): Bad file descriptor 00:24:52.617 [2024-10-04 06:42:45.112934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.617 [2024-10-04 06:42:45.112944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.617 [2024-10-04 06:42:45.112955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.617 [2024-10-04 06:42:45.112973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
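The burst of aborts above is the expected effect of pulling the listener out from under live I/O: qpair 0xce1b80 is freed, every queued command completes with ABORTED - SQ DELETION, and bdev_nvme enters the reset/reconnect loop governed by the attach options traced earlier. A minimal sketch of that failover window, using only RPCs that appear in this log (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # as traced above
    # attach with a bounded reconnect policy: retry the connect every 2 s and
    # delete the controller after 5 s without a successful reconnect
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # drop the path while bdevperf I/O is in flight; each retry then fails in
    # posix_sock_create with errno 111 (connection refused), as logged below
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp \
        -a 10.0.0.2 -s 4420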
00:24:52.617 [2024-10-04 06:42:45.112982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:52.617 06:42:45 -- host/timeout.sh@56 -- # sleep 2
00:24:54.521 [2024-10-04 06:42:47.113080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.521 [2024-10-04 06:42:47.113169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.521 [2024-10-04 06:42:47.113187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb0250 with addr=10.0.0.2, port=4420
00:24:54.521 [2024-10-04 06:42:47.113200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb0250 is same with the state(5) to be set
00:24:54.521 [2024-10-04 06:42:47.113231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0250 (9): Bad file descriptor
00:24:54.521 [2024-10-04 06:42:47.113257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.521 [2024-10-04 06:42:47.113268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.521 [2024-10-04 06:42:47.113277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.521 [2024-10-04 06:42:47.113301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:54.521 [2024-10-04 06:42:47.113312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.521 06:42:47 -- host/timeout.sh@57 -- # get_controller
00:24:54.521 06:42:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:54.521 06:42:47 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:54.778 06:42:47 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:54.778 06:42:47 -- host/timeout.sh@58 -- # get_bdev
00:24:54.778 06:42:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:54.778 06:42:47 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:55.035 06:42:47 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:55.036 06:42:47 -- host/timeout.sh@61 -- # sleep 5
00:24:56.938 [2024-10-04 06:42:49.113523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.938 [2024-10-04 06:42:49.113631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.938 [2024-10-04 06:42:49.113650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb0250 with addr=10.0.0.2, port=4420
00:24:56.938 [2024-10-04 06:42:49.113666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb0250 is same with the state(5) to be set
00:24:56.938 [2024-10-04 06:42:49.113695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0250 (9): Bad file descriptor
00:24:56.938 [2024-10-04 06:42:49.113716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.938 [2024-10-04 06:42:49.113725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.938 [2024-10-04 06:42:49.113737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.938 [2024-10-04 06:42:49.113767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:56.938 [2024-10-04 06:42:49.113779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:58.841 [2024-10-04 06:42:51.113819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:58.841 [2024-10-04 06:42:51.113926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:58.841 [2024-10-04 06:42:51.113939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:58.841 [2024-10-04 06:42:51.113950] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:58.841 [2024-10-04 06:42:51.113981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.777
00:24:59.777 Latency(us)
00:24:59.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.777 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:59.777 Verification LBA range: start 0x0 length 0x4000
00:24:59.777 NVMe0n1 : 8.17 2199.54 8.59 15.66 0.00 57708.37 2502.28 7015926.69
00:24:59.778 ===================================================================================================================
00:24:59.778 Total : 2199.54 8.59 15.66 0.00 57708.37 2502.28 7015926.69
00:24:59.778 0
00:25:00.036 06:42:52 -- host/timeout.sh@62 -- # get_controller
00:25:00.036 06:42:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:00.036 06:42:52 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:25:00.295 06:42:52 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:25:00.295 06:42:52 -- host/timeout.sh@63 -- # get_bdev
00:25:00.295 06:42:52 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:25:00.295 06:42:52 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:25:00.863 06:42:53 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:25:00.863 06:42:53 -- host/timeout.sh@65 -- # wait 99954
00:25:00.863 06:42:53 -- host/timeout.sh@67 -- # killprocess 99906
00:25:00.863 06:42:53 -- common/autotest_common.sh@926 -- # '[' -z 99906 ']'
00:25:00.863 06:42:53 -- common/autotest_common.sh@930 -- # kill -0 99906
00:25:00.863 06:42:53 -- common/autotest_common.sh@931 -- # uname
00:25:00.863 06:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:00.863 06:42:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99906
00:25:00.863 06:42:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2
killing process with pid 99906
Received shutdown signal, test time was about 9.339974 seconds
00:25:00.863
00:25:00.863 Latency(us)
00:25:00.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.863 ===================================================================================================================
00:25:00.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:00.863 06:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:25:00.863 06:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99906'
00:25:00.863 06:42:53 -- common/autotest_common.sh@945 -- # kill 99906
00:25:00.863 06:42:53 -- common/autotest_common.sh@950 -- # wait 99906
00:25:01.122 06:42:53 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.380 [2024-10-04 06:42:53.817093] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:01.380 06:42:53 -- host/timeout.sh@74 -- # bdevperf_pid=100113
00:25:01.380 06:42:53 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:25:01.380 06:42:53 -- host/timeout.sh@76 -- # waitforlisten 100113 /var/tmp/bdevperf.sock
00:25:01.380 06:42:53 -- common/autotest_common.sh@819 -- # '[' -z 100113 ']'
00:25:01.380 06:42:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:01.380 06:42:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:01.380 06:42:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:01.380 06:42:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:01.380 06:42:53 -- common/autotest_common.sh@10 -- # set +x
00:25:01.646 [2024-10-04 06:42:53.883071] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:25:01.646 [2024-10-04 06:42:53.883160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100113 ]
00:25:01.646 [2024-10-04 06:42:54.015795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.646 [2024-10-04 06:42:54.095427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:02.214 06:42:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:02.214 06:42:54 -- common/autotest_common.sh@852 -- # return 0
00:25:02.214 06:42:54 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:02.473 06:42:55 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:25:03.039 NVMe0n1
00:25:03.039 06:42:55 -- host/timeout.sh@84 -- # rpc_pid=100159
00:25:03.039 06:42:55 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:03.039 06:42:55 -- host/timeout.sh@86 -- # sleep 1
00:25:03.039 Running I/O for 10 seconds...
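The three RPCs just traced are the entire reconnect-policy setup for this run. A minimal sketch of the same step outside the harness (the paths, address, and NQN are copied from the log; reading -r -1 as an unlimited retry count, and the timeout comments, are assumptions about the option semantics rather than something the log states):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # Assumed meaning: -r -1 tells the driver never to give up retrying.
    $rpc -s $sock bdev_nvme_set_options -r -1
    # Retry the connection every 1 s, fail pending I/O fast after 2 s, and only
    # delete the controller if the target stays unreachable for a full 5 s.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With these knobs, the listener removal that follows should produce a bounded burst of aborts and reconnect attempts rather than an immediate controller delete, which is exactly what the next stretch of the log shows.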
00:25:03.971 06:42:56 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:04.231 [2024-10-04 06:42:56.729771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fed90 is same with the state(5) to be set
[log condensed: the line above repeats another ~27 times for tqpair=0x25fed90, with microsecond-apart timestamps from 06:42:56.729858 through 06:42:56.730085]
00:25:04.231 [2024-10-04 06:42:56.730628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.231 [2024-10-04 06:42:56.730678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: roughly 120 further print_command/print_completion pairs follow, one per queued READ or WRITE command on qid:1 (lba range 120344-121688, len:8), every one completed as ABORTED - SQ DELETION (00/08)]
00:25:04.234 [2024-10-04 06:42:56.733318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8c9c0 is same with the state(5) to be set
00:25:04.235 [2024-10-04 06:42:56.733330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:04.235 [2024-10-04 06:42:56.733338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:04.235 [2024-10-04 06:42:56.733345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121152 len:8 PRP1 0x0 PRP2 0x0
00:25:04.235 [2024-10-04 06:42:56.733353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.235 [2024-10-04 06:42:56.733433] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc8c9c0 was disconnected and freed. reset controller.
00:25:04.235 [2024-10-04 06:42:56.733658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:04.235 [2024-10-04 06:42:56.733766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor
00:25:04.235 [2024-10-04 06:42:56.733924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.235 [2024-10-04 06:42:56.733975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.235 [2024-10-04 06:42:56.733991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420
00:25:04.235 [2024-10-04 06:42:56.734001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b090 is same with the state(5) to be set
00:25:04.235 [2024-10-04 06:42:56.734019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor
00:25:04.235 [2024-10-04 06:42:56.734035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:04.235 [2024-10-04 06:42:56.734045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:04.235 [2024-10-04 06:42:56.734055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:04.235 [2024-10-04 06:42:56.734074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
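Everything in the condensed block above is the expected fallout of pulling the listener mid-run: the TCP qpair dies, every queued command (the job runs at queue depth 128) is manually completed as ABORTED - SQ DELETION, and bdev_nvme then retries the connect once per --reconnect-delay-sec, each attempt failing with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. The fault-injection pattern, sketched with only the RPCs that appear in this log (the sleep length is an arbitrary stand-in for the test's own pacing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Yank the target listener out from under the connected initiator...
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 2    # reconnect attempts fail with ECONNREFUSED while the port is closed
    # ...then restore it so the pending controller reset can finally succeed.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420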
00:25:04.235 [2024-10-04 06:42:56.734085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:04.235 06:42:56 -- host/timeout.sh@90 -- # sleep 1
00:25:05.170 [2024-10-04 06:42:57.734171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.170 [2024-10-04 06:42:57.734249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.170 [2024-10-04 06:42:57.734274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420
00:25:05.170 [2024-10-04 06:42:57.734285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b090 is same with the state(5) to be set
00:25:05.170 [2024-10-04 06:42:57.734302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor
00:25:05.170 [2024-10-04 06:42:57.734316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:05.170 [2024-10-04 06:42:57.734326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:05.170 [2024-10-04 06:42:57.734334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:05.170 [2024-10-04 06:42:57.734351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:05.170 [2024-10-04 06:42:57.734362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:05.170 06:42:57 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:05.428 [2024-10-04 06:42:58.036536] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:05.428 06:42:58 -- host/timeout.sh@92 -- # wait 100159
00:25:06.420 [2024-10-04 06:42:58.752875] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:12.990
00:25:12.990                                                  Latency(us)
00:25:12.990 Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s     Average      min        max
00:25:12.990 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:12.990 Verification LBA range: start 0x0 length 0x4000
00:25:12.990 NVMe0n1            : 10.01         10596.25    41.39     0.00     0.00    12060.27    1027.72    3019898.88
00:25:12.990 ===================================================================================================================
00:25:12.990 Total              :               10596.25    41.39     0.00     0.00    12060.27    1027.72    3019898.88
00:25:12.990 0
00:25:12.990 06:43:05 -- host/timeout.sh@97 -- # rpc_pid=100277
06:43:05 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
06:43:05 -- host/timeout.sh@98 -- # sleep 1
00:25:13.248 Running I/O for 10 seconds...
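scripts/rpc.py is a thin JSON-RPC client, and a rough sketch of what the nvmf_subsystem_add_listener call above puts on the wire is below. The socket path (/var/tmp/spdk.sock is SPDK's default), the request id, and the single recv() are simplifying assumptions of this sketch; the method name and the -t/-a/-s values come straight from the log, and listen_address follows SPDK's documented JSON-RPC shape.

import json
import socket

# JSON-RPC 2.0 request equivalent to:
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
#          -t tcp -a 10.0.0.2 -s 4420
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative id
    "method": "nvmf_subsystem_add_listener",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"},
    },
}
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")       # default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())         # one recv is enough for a short reply

As a sanity check on the latency table above: 10596.25 IOPS at 4096 bytes per I/O is 10596.25 * 4096 / 2**20, roughly 41.39 MiB/s, matching the reported throughput; the ~3,019,899 us max latency is consistent with I/O stalling while the listener was down and the controller reset was pending.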
00:25:14.188 06:43:06 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:14.188 [2024-10-04 06:43:06.854769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24589c0 is same with the state(5) to be set
00:25:14.188 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x24589c0 repeats ~50 times between 06:43:06.854880 and 06:43:06.855301; duplicates elided ...]
00:25:14.189 [2024-10-04 06:43:06.855581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.189 [2024-10-04 06:43:06.855654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.189 [... repeated nvme_io_qpair_print_command READ/WRITE (sqid:1, lba 115856-117096) / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" pairs elided ...]
00:25:14.192 [2024-10-04 06:43:06.858116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89960 is same with the state(5) to be set
00:25:14.192 [2024-10-04 06:43:06.858128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:14.192 [2024-10-04 06:43:06.858135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:14.192 [2024-10-04 06:43:06.858144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116432 len:8 PRP1 0x0 PRP2 0x0
00:25:14.192 [2024-10-04 06:43:06.858153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.192 [2024-10-04 06:43:06.858196] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc89960 was disconnected and freed. reset controller.
00:25:14.192 [2024-10-04 06:43:06.858264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.192 [2024-10-04 06:43:06.858278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.193 [... three further ASYNC EVENT REQUEST (cid:2, cid:1, cid:0) / "ABORTED - SQ DELETION (00/08)" pairs elided ...]
00:25:14.193 [2024-10-04 06:43:06.858340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b090 is same with the state(5) to be set
00:25:14.193 [2024-10-04 06:43:06.858541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.193 [2024-10-04 06:43:06.858571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor
00:25:14.193 [2024-10-04 06:43:06.858666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.193 [2024-10-04 06:43:06.858717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.193 [2024-10-04 06:43:06.858733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420
00:25:14.193 [2024-10-04 06:43:06.858742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b090 is same with the state(5) to be set
00:25:14.193 [2024-10-04 06:43:06.858760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor
00:25:14.193 [2024-10-04 06:43:06.858775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.193 [2024-10-04 06:43:06.858791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.193 [2024-10-04 06:43:06.858801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.452 [2024-10-04 06:43:06.869577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.452 [2024-10-04 06:43:06.869610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:14.452 06:43:06 -- host/timeout.sh@101 -- # sleep 3
00:25:15.389 [2024-10-04 06:43:07.869800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.389 [2024-10-04 06:43:07.869950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.389 [2024-10-04 06:43:07.869970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420
00:25:15.389 [... recv state / flush / "Ctrlr is in error state" / "controller reinitialization failed" / "in failed state." messages identical to the 06:43:06 attempt elided ...]
00:25:15.389 [2024-10-04 06:43:07.870109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.389 [2024-10-04 06:43:07.870121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.325 [2024-10-04 06:43:08.870206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.325 [2024-10-04 06:43:08.870301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.325 [2024-10-04 06:43:08.870318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420
00:25:16.325 [... identical recv state / flush / error-state messages elided ...]
00:25:16.325 [2024-10-04 06:43:08.870420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.325 [2024-10-04 06:43:08.870430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.263 [2024-10-04 06:43:09.870782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-10-04 06:43:09.870937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.263 [2024-10-04 06:43:09.870955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5b090 with addr=10.0.0.2, port=4420 00:25:17.263 [2024-10-04 06:43:09.870970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b090 is same with the state(5) to be set 00:25:17.263 [2024-10-04 06:43:09.871153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b090 (9): Bad file descriptor 00:25:17.263 [2024-10-04 06:43:09.871255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.263 [2024-10-04 06:43:09.871268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.263 [2024-10-04 06:43:09.871280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.263 [2024-10-04 06:43:09.873437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.263 [2024-10-04 06:43:09.873476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.263 06:43:09 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.521 [2024-10-04 06:43:10.133176] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.521 06:43:10 -- host/timeout.sh@103 -- # wait 100277 00:25:18.455 [2024-10-04 06:43:10.897349] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
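errno = 111 is ECONNREFUSED: the target's TCP listener is down at this point, so each reset attempt above fails and bdev_nvme immediately schedules another one; the listener is restored by host/timeout.sh@102, after which the 06:43:10 reset succeeds. A minimal sketch of that listener bounce, built only from the rpc.py calls that appear in this log (the surrounding test logic is assumed, not copied from timeout.sh):

    # Take the TCP listener away so the initiator's resets fail with
    # ECONNREFUSED, then give it back before the controller is declared lost.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # every reconnect attempt in this window logs errno = 111
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the next reset connects to 10.0.0.2:4420 and logs "Resetting controller successful."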
00:25:23.724
00:25:23.724 Latency(us)
00:25:23.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.724 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:23.724 Verification LBA range: start 0x0 length 0x4000
00:25:23.724 NVMe0n1 : 10.01 7713.83 30.13 7046.49 0.00 8660.76 860.16 3019898.88
00:25:23.724 ===================================================================================================================
00:25:23.724 Total : 7713.83 30.13 7046.49 0.00 8660.76 0.00 3019898.88
00:25:23.724 0
00:25:23.724 06:43:15 -- host/timeout.sh@105 -- # killprocess 100113
00:25:23.724 06:43:15 -- common/autotest_common.sh@926 -- # '[' -z 100113 ']'
00:25:23.724 06:43:15 -- common/autotest_common.sh@930 -- # kill -0 100113
00:25:23.724 06:43:15 -- common/autotest_common.sh@931 -- # uname
00:25:23.724 06:43:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:23.724 06:43:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100113
killing process with pid 100113
Received shutdown signal, test time was about 10.000000 seconds
00:25:23.724
00:25:23.724 Latency(us)
00:25:23.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.724 ===================================================================================================================
00:25:23.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:23.724 06:43:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:25:23.724 06:43:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:25:23.724 06:43:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100113'
00:25:23.724 06:43:15 -- common/autotest_common.sh@945 -- # kill 100113
00:25:23.724 06:43:15 -- common/autotest_common.sh@950 -- # wait 100113
00:25:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:23.724 06:43:16 -- host/timeout.sh@110 -- # bdevperf_pid=100398
00:25:23.724 06:43:16 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:23.724 06:43:16 -- host/timeout.sh@112 -- # waitforlisten 100398 /var/tmp/bdevperf.sock
00:25:23.724 06:43:16 -- common/autotest_common.sh@819 -- # '[' -z 100398 ']'
00:25:23.724 06:43:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:23.724 06:43:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:23.724 06:43:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:23.724 06:43:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:23.724 06:43:16 -- common/autotest_common.sh@10 -- # set +x
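Two notes on the summary table above. The throughput columns are self-consistent: 7713.83 IOPS at the 4096-byte IO size is 7713.83 x 4096 / 2^20 = 30.13 MiB/s, matching the MiB/s column, and the 3019898.88 us (about 3.0 s) max latency is consistent with I/Os held across the listener outage; the large Fail/s figure presumably reflects the commands aborted while the queue pair was torn down. The killprocess trace that follows is the stock autotest_common.sh helper; a minimal sketch of the pattern it traces (reconstructed from the xtrace lines above, not copied from autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # '[' -z 100113 ']'
        kill -0 "$pid" || return 0         # probe only: is the pid still alive?
        ps --no-headers -o comm= "$pid"    # name the process being killed
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it so the next phase starts clean
    }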
00:25:23.724 [2024-10-04 06:43:16.088270] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization...
00:25:23.724 [2024-10-04 06:43:16.088617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100398 ]
00:25:23.724 [2024-10-04 06:43:16.224414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:23.724 [2024-10-04 06:43:16.283686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:24.715 06:43:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:24.715 06:43:17 -- common/autotest_common.sh@852 -- # return 0
00:25:24.715 06:43:17 -- host/timeout.sh@116 -- # dtrace_pid=100426
00:25:24.715 06:43:17 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:25:24.715 06:43:17 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100398 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:25:24.715 06:43:17 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:25:24.973 NVMe0n1
00:25:24.973 06:43:17 -- host/timeout.sh@124 -- # rpc_pid=100480
00:25:24.973 06:43:17 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:24.973 06:43:17 -- host/timeout.sh@125 -- # sleep 1
00:25:25.230 Running I/O for 10 seconds...
00:25:26.164 06:43:18 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:26.425 [2024-10-04 06:43:18.892263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245bdb0 is same with the state(5) to be set
00:25:26.425 [... this tcp.c:1574 recv-state *ERROR* for tqpair=0x245bdb0 is then repeated verbatim, with timestamps running from 06:43:18.892263 through 06:43:18.892932; duplicates collapsed ...]
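The recv-state messages above are teardown noise: the qpair's receive state is being set to the state it already holds while the connection is dismantled after nvmf_subsystem_remove_listener. The timing of what follows is fixed by the attach flags quoted earlier: --reconnect-delay-sec 2 spaces the reconnect attempts two seconds apart, and --ctrlr-loss-timeout-sec 5 deletes the controller if no attempt succeeds within five seconds. A shell sketch of that retry budget (the real loop lives inside bdev_nvme; nc is used here only as a stand-in for the TCP connect):

    deadline=$(( $(date +%s) + 5 ))             # --ctrlr-loss-timeout-sec 5
    until nc -z 10.0.0.2 4420 2>/dev/null; do   # stand-in: can the target be reached?
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "controller lost"              # bdev_nvme would delete the bdev here
            break
        fi
        sleep 2                                 # --reconnect-delay-sec 2
    done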
00:25:26.426 [2024-10-04 06:43:18.893268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.426 [2024-10-04 06:43:18.893299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:26.426 [... the same READ print_command / ABORTED - SQ DELETION (00/08) pair then repeats for every remaining outstanding command on qid:1 (cid:12, cid:109, cid:15, cid:94, cid:49, and onward through cid:65 lba:122200 at 06:43:18.895450); duplicates collapsed ...] 00:25:26.427 [2024-10-04 06:43:18.895460]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.427 [2024-10-04 06:43:18.895815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fdfd10 is same with the state(5) to be set 00:25:26.427 [2024-10-04 06:43:18.895861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:26.427 [2024-10-04 06:43:18.895869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:26.427 [2024-10-04 
06:43:18.895906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68776 len:8 PRP1 0x0 PRP2 0x0 00:25:26.427 [2024-10-04 06:43:18.895916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.427 [2024-10-04 06:43:18.895980] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fdfd10 was disconnected and freed. reset controller. 00:25:26.427 [2024-10-04 06:43:18.896239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.427 [2024-10-04 06:43:18.896306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae0b0 (9): Bad file descriptor 00:25:26.427 [2024-10-04 06:43:18.896433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.427 [2024-10-04 06:43:18.896479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.427 [2024-10-04 06:43:18.896493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae0b0 with addr=10.0.0.2, port=4420 00:25:26.427 [2024-10-04 06:43:18.896503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae0b0 is same with the state(5) to be set 00:25:26.427 [2024-10-04 06:43:18.896519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae0b0 (9): Bad file descriptor 00:25:26.427 [2024-10-04 06:43:18.896534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:26.427 [2024-10-04 06:43:18.896543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:26.427 [2024-10-04 06:43:18.896553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.427 [2024-10-04 06:43:18.896571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:26.427 [2024-10-04 06:43:18.896581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.427 06:43:18 -- host/timeout.sh@128 -- # wait 100480 00:25:28.325 [2024-10-04 06:43:20.896736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-10-04 06:43:20.896857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-10-04 06:43:20.896877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae0b0 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-10-04 06:43:20.896891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae0b0 is same with the state(5) to be set 00:25:28.325 [2024-10-04 06:43:20.896915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae0b0 (9): Bad file descriptor 00:25:28.325 [2024-10-04 06:43:20.896944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.325 [2024-10-04 06:43:20.896956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.325 [2024-10-04 06:43:20.896967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
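The reset attempts above run on a fixed 2-second cadence (06:43:18, 06:43:20, and the retries that follow), which is the bdev_nvme reconnect delay this timeout test exercises. As a minimal sketch, assuming current scripts/rpc.py option names (the exact attach command is not shown in this log), such a policy is typically requested when the controller is attached:

    # Hypothetical attach: wait 2 s between reconnect attempts, never drop the controller
    scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec -1

With --ctrlr-loss-timeout-sec -1 the bdev layer retries indefinitely, producing one connect() failed, errno = 111 / Resetting controller failed. pair per cycle, as seen below, until the target returns or the test tears the process down.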
00:25:28.325 [2024-10-04 06:43:20.896992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:28.325 [2024-10-04 06:43:20.897003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.222 [2024-10-04 06:43:22.897085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.222 [2024-10-04 06:43:22.897171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.222 [2024-10-04 06:43:22.897188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fae0b0 with addr=10.0.0.2, port=4420
00:25:30.222 [2024-10-04 06:43:22.897199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fae0b0 is same with the state(5) to be set
00:25:30.222 [2024-10-04 06:43:22.897217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fae0b0 (9): Bad file descriptor
00:25:30.222 [2024-10-04 06:43:22.897248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.222 [2024-10-04 06:43:22.897257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.222 [2024-10-04 06:43:22.897266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.222 [2024-10-04 06:43:22.897284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.222 [2024-10-04 06:43:22.897293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:32.754 [2024-10-04 06:43:24.897324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:32.754 [2024-10-04 06:43:24.897374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:32.754 [2024-10-04 06:43:24.897392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:32.754 [2024-10-04 06:43:24.897400] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:32.754 [2024-10-04 06:43:24.897418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:33.319
00:25:33.319 Latency(us)
00:25:33.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:33.319 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:33.319 NVMe0n1 : 8.14 3238.07 12.65 15.73 0.00 39291.00 2859.75 7015926.69
00:25:33.319 ===================================================================================================================
00:25:33.319 Total : 3238.07 12.65 15.73 0.00 39291.00 2859.75 7015926.69
00:25:33.319 0
00:25:33.319 06:43:25 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:33.319 Attaching 5 probes...
00:25:33.319 1321.344324: reset bdev controller NVMe0 00:25:33.319 1321.471705: reconnect bdev controller NVMe0 00:25:33.319 3321.736180: reconnect delay bdev controller NVMe0 00:25:33.319 3321.755188: reconnect bdev controller NVMe0 00:25:33.319 5322.137617: reconnect delay bdev controller NVMe0 00:25:33.319 5322.151383: reconnect bdev controller NVMe0 00:25:33.319 7322.424207: reconnect delay bdev controller NVMe0 00:25:33.319 7322.437227: reconnect bdev controller NVMe0 00:25:33.319 06:43:25 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:33.319 06:43:25 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:33.319 06:43:25 -- host/timeout.sh@136 -- # kill 100426 00:25:33.319 06:43:25 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.319 06:43:25 -- host/timeout.sh@139 -- # killprocess 100398 00:25:33.319 06:43:25 -- common/autotest_common.sh@926 -- # '[' -z 100398 ']' 00:25:33.319 06:43:25 -- common/autotest_common.sh@930 -- # kill -0 100398 00:25:33.319 06:43:25 -- common/autotest_common.sh@931 -- # uname 00:25:33.319 06:43:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.319 06:43:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100398 00:25:33.319 killing process with pid 100398 00:25:33.319 Received shutdown signal, test time was about 8.204494 seconds 00:25:33.319 00:25:33.319 Latency(us) 00:25:33.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.319 =================================================================================================================== 00:25:33.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.319 06:43:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:25:33.319 06:43:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:25:33.319 06:43:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100398' 00:25:33.319 06:43:25 -- common/autotest_common.sh@945 -- # kill 100398 00:25:33.319 06:43:25 -- common/autotest_common.sh@950 -- # wait 100398 00:25:33.577 06:43:26 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.835 06:43:26 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:33.835 06:43:26 -- host/timeout.sh@145 -- # nvmftestfini 00:25:33.835 06:43:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.835 06:43:26 -- nvmf/common.sh@116 -- # sync 00:25:34.111 06:43:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:34.111 06:43:26 -- nvmf/common.sh@119 -- # set +e 00:25:34.111 06:43:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:34.111 06:43:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:34.111 rmmod nvme_tcp 00:25:34.111 rmmod nvme_fabrics 00:25:34.111 rmmod nvme_keyring 00:25:34.111 06:43:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:34.111 06:43:26 -- nvmf/common.sh@123 -- # set -e 00:25:34.111 06:43:26 -- nvmf/common.sh@124 -- # return 0 00:25:34.111 06:43:26 -- nvmf/common.sh@477 -- # '[' -n 99815 ']' 00:25:34.111 06:43:26 -- nvmf/common.sh@478 -- # killprocess 99815 00:25:34.111 06:43:26 -- common/autotest_common.sh@926 -- # '[' -z 99815 ']' 00:25:34.111 06:43:26 -- common/autotest_common.sh@930 -- # kill -0 99815 00:25:34.111 06:43:26 -- common/autotest_common.sh@931 -- # uname 00:25:34.111 06:43:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:34.111 06:43:26 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 99815 00:25:34.111 killing process with pid 99815 00:25:34.111 06:43:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:34.111 06:43:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:34.111 06:43:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99815' 00:25:34.111 06:43:26 -- common/autotest_common.sh@945 -- # kill 99815 00:25:34.111 06:43:26 -- common/autotest_common.sh@950 -- # wait 99815 00:25:34.371 06:43:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.371 06:43:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.371 06:43:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.371 06:43:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.371 06:43:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.371 06:43:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.371 06:43:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.371 06:43:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.371 06:43:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:34.371 00:25:34.371 real 0m47.641s 00:25:34.371 user 2m19.993s 00:25:34.371 sys 0m5.319s 00:25:34.371 06:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.371 ************************************ 00:25:34.371 END TEST nvmf_timeout 00:25:34.371 ************************************ 00:25:34.371 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:25:34.371 06:43:27 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:34.371 06:43:27 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:34.371 06:43:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:34.371 06:43:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.630 06:43:27 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:34.630 00:25:34.630 real 17m34.345s 00:25:34.630 user 56m8.378s 00:25:34.630 sys 3m44.433s 00:25:34.630 06:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.630 ************************************ 00:25:34.630 06:43:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.630 END TEST nvmf_tcp 00:25:34.630 ************************************ 00:25:34.630 06:43:27 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:25:34.630 06:43:27 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:34.630 06:43:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:34.630 06:43:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.630 06:43:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.630 ************************************ 00:25:34.630 START TEST spdkcli_nvmf_tcp 00:25:34.630 ************************************ 00:25:34.630 06:43:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:34.630 * Looking for test storage... 
00:25:34.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:34.630 06:43:27 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:34.630 06:43:27 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:34.630 06:43:27 -- nvmf/common.sh@7 -- # uname -s 00:25:34.630 06:43:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.630 06:43:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.630 06:43:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.630 06:43:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.630 06:43:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.630 06:43:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.630 06:43:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.630 06:43:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.630 06:43:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.630 06:43:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.630 06:43:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:34.630 06:43:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:34.630 06:43:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.630 06:43:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.630 06:43:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:34.630 06:43:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:34.630 06:43:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.630 06:43:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.630 06:43:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.630 06:43:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.630 06:43:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.630 06:43:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.630 06:43:27 -- paths/export.sh@5 -- # export PATH 00:25:34.630 
06:43:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.630 06:43:27 -- nvmf/common.sh@46 -- # : 0 00:25:34.630 06:43:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:34.630 06:43:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:34.630 06:43:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:34.630 06:43:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.630 06:43:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.630 06:43:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:34.630 06:43:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:34.630 06:43:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:34.630 06:43:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:34.630 06:43:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.630 06:43:27 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:34.630 06:43:27 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100696 00:25:34.630 06:43:27 -- spdkcli/common.sh@34 -- # waitforlisten 100696 00:25:34.630 06:43:27 -- common/autotest_common.sh@819 -- # '[' -z 100696 ']' 00:25:34.630 06:43:27 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:34.630 06:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.630 06:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.630 06:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.630 06:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.630 06:43:27 -- common/autotest_common.sh@10 -- # set +x 00:25:34.630 [2024-10-04 06:43:27.283477] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
00:25:34.630 [2024-10-04 06:43:27.283587] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100696 ] 00:25:34.888 [2024-10-04 06:43:27.420176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:34.888 [2024-10-04 06:43:27.487497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:34.888 [2024-10-04 06:43:27.487791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.889 [2024-10-04 06:43:27.487801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.824 06:43:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:35.824 06:43:28 -- common/autotest_common.sh@852 -- # return 0 00:25:35.824 06:43:28 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:35.824 06:43:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:35.824 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.824 06:43:28 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:35.824 06:43:28 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:35.824 06:43:28 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:35.824 06:43:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:35.824 06:43:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.824 06:43:28 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:35.824 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:35.824 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:35.824 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:35.824 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:35.824 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:35.824 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:35.824 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:35.824 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:35.824 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:35.824 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:35.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:35.824 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:35.824 ' 00:25:36.392 [2024-10-04 06:43:28.884158] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:38.922 [2024-10-04 06:43:31.172401] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.856 [2024-10-04 06:43:32.466216] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:42.396 [2024-10-04 06:43:34.861301] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:44.311 [2024-10-04 06:43:36.924065] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:46.213 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:46.213 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:46.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:46.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:46.213 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:46.213 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:46.213 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:46.213 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:46.213 06:43:38 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:46.213 06:43:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:46.213 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:25:46.213 06:43:38 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:46.213 06:43:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:46.213 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:25:46.213 06:43:38 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:46.213 06:43:38 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:46.472 06:43:39 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:46.731 06:43:39 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:46.731 06:43:39 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:46.731 06:43:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:46.731 06:43:39 -- common/autotest_common.sh@10 -- # set +x 00:25:46.731 06:43:39 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:46.731 06:43:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:46.731 06:43:39 -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.731 06:43:39 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:46.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:46.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:46.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:46.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:46.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:46.731 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:46.731 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:46.731 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:46.731 ' 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:53.293 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:53.293 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:53.293 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:53.293 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:53.293 06:43:44 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:53.293 06:43:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:53.293 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:25:53.293 06:43:44 -- spdkcli/nvmf.sh@90 -- # killprocess 100696 00:25:53.293 06:43:44 -- common/autotest_common.sh@926 -- # '[' -z 100696 ']' 00:25:53.293 06:43:44 -- common/autotest_common.sh@930 -- # kill -0 100696 00:25:53.293 06:43:44 -- common/autotest_common.sh@931 -- # uname 00:25:53.293 06:43:44 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:53.293 06:43:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100696 00:25:53.293 06:43:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:53.293 06:43:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:53.293 06:43:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100696' 00:25:53.293 killing process with pid 100696 00:25:53.293 06:43:44 -- common/autotest_common.sh@945 -- # kill 100696 00:25:53.293 [2024-10-04 06:43:44.928983] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:53.293 06:43:44 -- common/autotest_common.sh@950 -- # wait 100696 00:25:53.293 06:43:45 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:53.293 06:43:45 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:53.293 06:43:45 -- spdkcli/common.sh@13 -- # '[' -n 100696 ']' 00:25:53.293 06:43:45 -- spdkcli/common.sh@14 -- # killprocess 100696 00:25:53.293 06:43:45 -- common/autotest_common.sh@926 -- # '[' -z 100696 ']' 00:25:53.293 06:43:45 -- common/autotest_common.sh@930 -- # kill -0 100696 00:25:53.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100696) - No such process 00:25:53.294 Process with pid 100696 is not found 00:25:53.294 06:43:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100696 is not found' 00:25:53.294 06:43:45 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:53.294 06:43:45 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:53.294 06:43:45 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:53.294 00:25:53.294 real 0m18.071s 00:25:53.294 user 0m39.429s 00:25:53.294 sys 0m0.962s 00:25:53.294 06:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.294 06:43:45 -- common/autotest_common.sh@10 -- # set +x 00:25:53.294 ************************************ 00:25:53.294 END TEST spdkcli_nvmf_tcp 00:25:53.294 ************************************ 00:25:53.294 06:43:45 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:53.294 06:43:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:53.294 06:43:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:53.294 06:43:45 -- common/autotest_common.sh@10 -- # set +x 00:25:53.294 ************************************ 00:25:53.294 START TEST nvmf_identify_passthru 00:25:53.294 ************************************ 00:25:53.294 06:43:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:53.294 * Looking for test storage... 
00:25:53.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:53.294 06:43:45 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.294 06:43:45 -- nvmf/common.sh@7 -- # uname -s 00:25:53.294 06:43:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.294 06:43:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.294 06:43:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.294 06:43:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.294 06:43:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.294 06:43:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.294 06:43:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.294 06:43:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.294 06:43:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.294 06:43:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:53.294 06:43:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:53.294 06:43:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.294 06:43:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.294 06:43:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:53.294 06:43:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.294 06:43:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.294 06:43:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.294 06:43:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.294 06:43:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@5 -- # export PATH 00:25:53.294 06:43:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- nvmf/common.sh@46 -- # : 0 00:25:53.294 06:43:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:53.294 06:43:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:53.294 06:43:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:53.294 06:43:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.294 06:43:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.294 06:43:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:53.294 06:43:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:53.294 06:43:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:53.294 06:43:45 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.294 06:43:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.294 06:43:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.294 06:43:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.294 06:43:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- paths/export.sh@5 -- # export PATH 00:25:53.294 06:43:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.294 06:43:45 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:53.294 06:43:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:53.294 06:43:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.294 06:43:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:53.294 06:43:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:53.294 06:43:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:53.294 06:43:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.294 06:43:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:53.294 06:43:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.294 06:43:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:53.294 06:43:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:53.294 06:43:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.294 06:43:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.294 06:43:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:53.294 06:43:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:53.294 06:43:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:53.294 06:43:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.294 06:43:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.294 06:43:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.294 06:43:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.294 06:43:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.294 06:43:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.294 06:43:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.294 06:43:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:53.294 06:43:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:53.294 Cannot find device "nvmf_tgt_br" 00:25:53.294 06:43:45 -- nvmf/common.sh@154 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.294 Cannot find device "nvmf_tgt_br2" 00:25:53.294 06:43:45 -- nvmf/common.sh@155 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:53.294 06:43:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:53.294 Cannot find device "nvmf_tgt_br" 00:25:53.294 06:43:45 -- nvmf/common.sh@157 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:53.294 Cannot find device "nvmf_tgt_br2" 00:25:53.294 06:43:45 -- nvmf/common.sh@158 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:53.294 06:43:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:53.294 06:43:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.294 06:43:45 -- nvmf/common.sh@161 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:53.294 06:43:45 -- nvmf/common.sh@162 -- # true 00:25:53.294 06:43:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.294 06:43:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.294 06:43:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.294 06:43:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.294 06:43:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.294 06:43:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.294 06:43:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.295 06:43:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.295 06:43:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.295 06:43:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:53.295 06:43:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:53.295 06:43:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:53.295 06:43:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:53.295 06:43:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.295 06:43:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.295 06:43:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.295 06:43:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:53.295 06:43:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:53.295 06:43:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.295 06:43:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.295 06:43:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.295 06:43:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.295 06:43:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.295 06:43:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:53.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:25:53.295 00:25:53.295 --- 10.0.0.2 ping statistics --- 00:25:53.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.295 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:53.295 06:43:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:53.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:53.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:25:53.295 00:25:53.295 --- 10.0.0.3 ping statistics --- 00:25:53.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.295 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:53.295 06:43:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:25:53.295 00:25:53.295 --- 10.0.0.1 ping statistics --- 00:25:53.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.295 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:53.295 06:43:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.295 06:43:45 -- nvmf/common.sh@421 -- # return 0 00:25:53.295 06:43:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:53.295 06:43:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.295 06:43:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:53.295 06:43:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:53.295 06:43:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.295 06:43:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:53.295 06:43:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:53.295 06:43:45 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:53.295 06:43:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:53.295 06:43:45 -- common/autotest_common.sh@10 -- # set +x 00:25:53.295 06:43:45 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:53.295 06:43:45 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:53.295 06:43:45 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:53.295 06:43:45 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:53.295 06:43:45 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:53.295 06:43:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:53.295 06:43:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:53.295 06:43:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:53.295 06:43:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:53.295 06:43:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:53.295 06:43:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:53.295 06:43:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:53.295 06:43:45 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:53.295 06:43:45 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:53.295 06:43:45 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:53.295 06:43:45 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:53.295 06:43:45 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:53.295 06:43:45 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:53.295 06:43:45 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:53.295 06:43:45 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:53.295 06:43:45 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:53.295 06:43:45 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:53.553 06:43:46 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:53.553 06:43:46 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:53.553 06:43:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:53.553 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.553 06:43:46 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:53.553 06:43:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:53.553 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.554 06:43:46 -- target/identify_passthru.sh@31 -- # nvmfpid=101200 00:25:53.554 06:43:46 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:53.554 06:43:46 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.554 06:43:46 -- target/identify_passthru.sh@35 -- # waitforlisten 101200 00:25:53.554 06:43:46 -- common/autotest_common.sh@819 -- # '[' -z 101200 ']' 00:25:53.554 06:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.554 06:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:53.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.554 06:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.554 06:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:53.554 06:43:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.554 [2024-10-04 06:43:46.227372] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:25:53.554 [2024-10-04 06:43:46.227487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.811 [2024-10-04 06:43:46.367397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.811 [2024-10-04 06:43:46.451888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:53.811 [2024-10-04 06:43:46.452052] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.811 [2024-10-04 06:43:46.452065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.811 [2024-10-04 06:43:46.452074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
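[Editor's note] The nvmf_veth_init sequence traced above builds the bridged veth topology that the rest of the run depends on. A standalone sketch of the same layout, using the interface names and addresses from the trace (simplified to one target interface; the trace also creates nvmf_tgt_if2/10.0.0.3 the same way; this is not the SPDK helper itself):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified in the trace above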
00:25:53.811 [2024-10-04 06:43:46.452240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.811 [2024-10-04 06:43:46.452810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.811 [2024-10-04 06:43:46.452921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.811 [2024-10-04 06:43:46.452928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.745 06:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:54.745 06:43:47 -- common/autotest_common.sh@852 -- # return 0 00:25:54.745 06:43:47 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:54.745 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.745 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.745 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.745 06:43:47 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:54.745 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.745 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.745 [2024-10-04 06:43:47.350730] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:54.745 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.745 06:43:47 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.745 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.745 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.745 [2024-10-04 06:43:47.360922] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.745 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.745 06:43:47 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:54.745 06:43:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:54.746 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:54.746 06:43:47 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:54.746 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.746 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.004 Nvme0n1 00:25:55.004 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.004 06:43:47 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:55.004 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.004 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.004 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.004 06:43:47 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:55.004 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.004 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.004 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.004 06:43:47 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.004 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.004 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.004 [2024-10-04 06:43:47.501366] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.004 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:55.004 06:43:47 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:55.004 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.004 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.004 [2024-10-04 06:43:47.509067] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:55.004 [ 00:25:55.004 { 00:25:55.004 "allow_any_host": true, 00:25:55.004 "hosts": [], 00:25:55.004 "listen_addresses": [], 00:25:55.004 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:55.004 "subtype": "Discovery" 00:25:55.004 }, 00:25:55.004 { 00:25:55.004 "allow_any_host": true, 00:25:55.004 "hosts": [], 00:25:55.004 "listen_addresses": [ 00:25:55.004 { 00:25:55.004 "adrfam": "IPv4", 00:25:55.004 "traddr": "10.0.0.2", 00:25:55.004 "transport": "TCP", 00:25:55.004 "trsvcid": "4420", 00:25:55.004 "trtype": "TCP" 00:25:55.004 } 00:25:55.004 ], 00:25:55.004 "max_cntlid": 65519, 00:25:55.004 "max_namespaces": 1, 00:25:55.004 "min_cntlid": 1, 00:25:55.004 "model_number": "SPDK bdev Controller", 00:25:55.004 "namespaces": [ 00:25:55.004 { 00:25:55.004 "bdev_name": "Nvme0n1", 00:25:55.004 "name": "Nvme0n1", 00:25:55.004 "nguid": "048B4973C98A4956865B768947073786", 00:25:55.004 "nsid": 1, 00:25:55.004 "uuid": "048b4973-c98a-4956-865b-768947073786" 00:25:55.004 } 00:25:55.004 ], 00:25:55.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.004 "serial_number": "SPDK00000000000001", 00:25:55.004 "subtype": "NVMe" 00:25:55.004 } 00:25:55.004 ] 00:25:55.004 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.004 06:43:47 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:55.004 06:43:47 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:55.004 06:43:47 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:55.262 06:43:47 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:55.262 06:43:47 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:55.262 06:43:47 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.262 06:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.262 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.520 06:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.520 06:43:47 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:55.520 06:43:47 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:55.520 06:43:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:55.520 06:43:47 -- nvmf/common.sh@116 -- # sync 00:25:55.521 06:43:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:55.521 06:43:48 -- nvmf/common.sh@119 -- # set +e 00:25:55.521 06:43:48 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:55.521 06:43:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:55.521 rmmod nvme_tcp 00:25:55.521 rmmod nvme_fabrics 00:25:55.521 rmmod nvme_keyring 00:25:55.521 06:43:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:55.521 06:43:48 -- nvmf/common.sh@123 -- # set -e 00:25:55.521 06:43:48 -- nvmf/common.sh@124 -- # return 0 00:25:55.521 06:43:48 -- nvmf/common.sh@477 -- # '[' -n 101200 ']' 00:25:55.521 06:43:48 -- nvmf/common.sh@478 -- # killprocess 101200 00:25:55.521 06:43:48 -- common/autotest_common.sh@926 -- # '[' -z 101200 ']' 00:25:55.521 06:43:48 -- common/autotest_common.sh@930 -- # kill -0 101200 00:25:55.521 06:43:48 -- common/autotest_common.sh@931 -- # uname 00:25:55.521 06:43:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.521 06:43:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101200 00:25:55.521 killing process with pid 101200 00:25:55.521 06:43:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:55.521 06:43:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:55.521 06:43:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101200' 00:25:55.521 06:43:48 -- common/autotest_common.sh@945 -- # kill 101200 00:25:55.521 [2024-10-04 06:43:48.129272] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:55.521 06:43:48 -- common/autotest_common.sh@950 -- # wait 101200 00:25:55.778 06:43:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:55.778 06:43:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:55.778 06:43:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:55.778 06:43:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.778 06:43:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:55.778 06:43:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.778 06:43:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:55.778 06:43:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.779 06:43:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:55.779 00:25:55.779 real 0m3.187s 00:25:55.779 user 0m8.106s 00:25:55.779 sys 0m0.855s 00:25:55.779 06:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.779 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:25:55.779 ************************************ 00:25:55.779 END TEST nvmf_identify_passthru 00:25:55.779 ************************************ 00:25:56.037 06:43:48 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:56.037 06:43:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:56.037 06:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:56.037 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:25:56.037 ************************************ 00:25:56.037 START TEST nvmf_dif 00:25:56.037 ************************************ 00:25:56.037 06:43:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:56.037 * Looking for test storage... 
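[Editor's note] Collecting the rpc_cmd calls scattered through the identify_passthru run above, the target-side configuration reduces to the following sequence (arguments verbatim from the trace; rpc.py is the script rpc_cmd wraps, and the target was started with --wait-for-rpc so framework_start_init completes initialization):

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # verification: identify over the fabric must report the same serial/model as PCIe
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'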
00:25:56.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:56.037 06:43:48 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:56.037 06:43:48 -- nvmf/common.sh@7 -- # uname -s 00:25:56.037 06:43:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.037 06:43:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.037 06:43:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.037 06:43:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.037 06:43:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.037 06:43:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.037 06:43:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.037 06:43:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.037 06:43:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.037 06:43:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:56.037 06:43:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:25:56.037 06:43:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.037 06:43:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.037 06:43:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:56.037 06:43:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:56.037 06:43:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.037 06:43:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.037 06:43:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.037 06:43:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.037 06:43:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.037 06:43:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.037 06:43:48 -- paths/export.sh@5 -- # export PATH 00:25:56.037 06:43:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.037 06:43:48 -- nvmf/common.sh@46 -- # : 0 00:25:56.037 06:43:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:56.037 06:43:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:56.037 06:43:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:56.037 06:43:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.037 06:43:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.037 06:43:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:56.037 06:43:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:56.037 06:43:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:56.037 06:43:48 -- target/dif.sh@15 -- # NULL_META=16 00:25:56.037 06:43:48 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:56.037 06:43:48 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:56.037 06:43:48 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:56.037 06:43:48 -- target/dif.sh@135 -- # nvmftestinit 00:25:56.037 06:43:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:56.037 06:43:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.037 06:43:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:56.037 06:43:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:56.037 06:43:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:56.037 06:43:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.037 06:43:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:56.037 06:43:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.037 06:43:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:56.037 06:43:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:56.037 06:43:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.037 06:43:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.037 06:43:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:56.037 06:43:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:56.037 06:43:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:56.037 06:43:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:56.037 06:43:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:56.037 06:43:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.037 06:43:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:56.037 06:43:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:56.037 06:43:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:56.037 06:43:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:56.037 06:43:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:56.037 06:43:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:56.037 Cannot find device "nvmf_tgt_br" 
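[Editor's note] dif.sh sources the same nvmf/common.sh, which mints a fresh host identity per run (the NVME_HOSTNQN/NVME_HOSTID pair visible above). A minimal sketch of that pattern, assuming the gen-hostnqn output format shown in the trace; the nvme connect line only illustrates how the NVME_HOST flags are consumed elsewhere in the suite, since this particular test drives I/O through the fio bdev plugin rather than the kernel initiator:

    HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c
    HOSTID=${HOSTNQN##*uuid:}       # the UUID suffix doubles as the host ID, matching the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn "$HOSTNQN" --hostid "$HOSTID"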
00:25:56.037 06:43:48 -- nvmf/common.sh@154 -- # true 00:25:56.037 06:43:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:56.037 Cannot find device "nvmf_tgt_br2" 00:25:56.037 06:43:48 -- nvmf/common.sh@155 -- # true 00:25:56.037 06:43:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:56.037 06:43:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:56.037 Cannot find device "nvmf_tgt_br" 00:25:56.037 06:43:48 -- nvmf/common.sh@157 -- # true 00:25:56.037 06:43:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:56.037 Cannot find device "nvmf_tgt_br2" 00:25:56.037 06:43:48 -- nvmf/common.sh@158 -- # true 00:25:56.037 06:43:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:56.037 06:43:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:56.037 06:43:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:56.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.295 06:43:48 -- nvmf/common.sh@161 -- # true 00:25:56.295 06:43:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:56.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:56.295 06:43:48 -- nvmf/common.sh@162 -- # true 00:25:56.295 06:43:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:56.295 06:43:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:56.295 06:43:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:56.295 06:43:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:56.295 06:43:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:56.295 06:43:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:56.295 06:43:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:56.295 06:43:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:56.295 06:43:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:56.295 06:43:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:56.295 06:43:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:56.295 06:43:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:56.295 06:43:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:56.296 06:43:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:56.296 06:43:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:56.296 06:43:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:56.296 06:43:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:56.296 06:43:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:56.296 06:43:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:56.296 06:43:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:56.296 06:43:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:56.296 06:43:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:56.296 06:43:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:56.296 06:43:48 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:56.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:56.296 00:25:56.296 --- 10.0.0.2 ping statistics --- 00:25:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.296 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:56.296 06:43:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:56.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:56.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:25:56.296 00:25:56.296 --- 10.0.0.3 ping statistics --- 00:25:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.296 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:56.296 06:43:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:56.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:56.296 00:25:56.296 --- 10.0.0.1 ping statistics --- 00:25:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.296 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:56.296 06:43:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.296 06:43:48 -- nvmf/common.sh@421 -- # return 0 00:25:56.296 06:43:48 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:56.296 06:43:48 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:56.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:56.811 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:56.811 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:56.811 06:43:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.811 06:43:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:56.811 06:43:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:56.811 06:43:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.811 06:43:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:56.811 06:43:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:56.811 06:43:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:56.811 06:43:49 -- target/dif.sh@137 -- # nvmfappstart 00:25:56.811 06:43:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:56.811 06:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:56.811 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:25:56.811 06:43:49 -- nvmf/common.sh@469 -- # nvmfpid=101555 00:25:56.811 06:43:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:56.811 06:43:49 -- nvmf/common.sh@470 -- # waitforlisten 101555 00:25:56.811 06:43:49 -- common/autotest_common.sh@819 -- # '[' -z 101555 ']' 00:25:56.811 06:43:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.811 06:43:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:56.811 06:43:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
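[Editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message below comes from waitforlisten, whose shape the max_retries=100 assignment hints at. A simplified sketch of such a wait loop (an assumption about the implementation, not the exact autotest_common.sh helper; rpc_get_methods is a cheap query that succeeds once the app is listening):

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do            # max_retries=100, as set in the trace
            kill -0 "$pid" 2>/dev/null || return 1 # app died before it started listening
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }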
00:25:56.811 06:43:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:56.811 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:25:56.811 [2024-10-04 06:43:49.387253] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 00:25:56.811 [2024-10-04 06:43:49.387345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.069 [2024-10-04 06:43:49.524671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.069 [2024-10-04 06:43:49.597780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:57.069 [2024-10-04 06:43:49.597940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.069 [2024-10-04 06:43:49.597953] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.069 [2024-10-04 06:43:49.597962] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.069 [2024-10-04 06:43:49.597987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.002 06:43:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:58.002 06:43:50 -- common/autotest_common.sh@852 -- # return 0 00:25:58.002 06:43:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:58.002 06:43:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 06:43:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.002 06:43:50 -- target/dif.sh@139 -- # create_transport 00:25:58.002 06:43:50 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:58.002 06:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 [2024-10-04 06:43:50.475245] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.002 06:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.002 06:43:50 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:58.002 06:43:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:58.002 06:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 ************************************ 00:25:58.002 START TEST fio_dif_1_default 00:25:58.002 ************************************ 00:25:58.002 06:43:50 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:58.002 06:43:50 -- target/dif.sh@86 -- # create_subsystems 0 00:25:58.002 06:43:50 -- target/dif.sh@28 -- # local sub 00:25:58.002 06:43:50 -- target/dif.sh@30 -- # for sub in "$@" 00:25:58.002 06:43:50 -- target/dif.sh@31 -- # create_subsystem 0 00:25:58.002 06:43:50 -- target/dif.sh@18 -- # local sub_id=0 00:25:58.002 06:43:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:58.002 06:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 bdev_null0 00:25:58.002 06:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.002 06:43:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:58.002 06:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 06:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.002 06:43:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:58.002 06:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 06:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.002 06:43:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.002 06:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.002 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:58.002 [2024-10-04 06:43:50.519478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.002 06:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.002 06:43:50 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:58.002 06:43:50 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:58.002 06:43:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:58.002 06:43:50 -- nvmf/common.sh@520 -- # config=() 00:25:58.002 06:43:50 -- nvmf/common.sh@520 -- # local subsystem config 00:25:58.002 06:43:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.002 06:43:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:58.002 06:43:50 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.002 06:43:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:58.002 { 00:25:58.002 "params": { 00:25:58.002 "name": "Nvme$subsystem", 00:25:58.002 "trtype": "$TEST_TRANSPORT", 00:25:58.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:58.002 "adrfam": "ipv4", 00:25:58.002 "trsvcid": "$NVMF_PORT", 00:25:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:58.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:58.002 "hdgst": ${hdgst:-false}, 00:25:58.002 "ddgst": ${ddgst:-false} 00:25:58.002 }, 00:25:58.002 "method": "bdev_nvme_attach_controller" 00:25:58.002 } 00:25:58.002 EOF 00:25:58.002 )") 00:25:58.002 06:43:50 -- target/dif.sh@82 -- # gen_fio_conf 00:25:58.002 06:43:50 -- target/dif.sh@54 -- # local file 00:25:58.002 06:43:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:58.002 06:43:50 -- target/dif.sh@56 -- # cat 00:25:58.002 06:43:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.002 06:43:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:58.002 06:43:50 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.002 06:43:50 -- common/autotest_common.sh@1320 -- # shift 00:25:58.002 06:43:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:58.002 06:43:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.002 06:43:50 -- nvmf/common.sh@542 -- # cat 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # awk '{print 
$3}' 00:25:58.002 06:43:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:58.002 06:43:50 -- target/dif.sh@72 -- # (( file <= files )) 00:25:58.002 06:43:50 -- nvmf/common.sh@544 -- # jq . 00:25:58.002 06:43:50 -- nvmf/common.sh@545 -- # IFS=, 00:25:58.002 06:43:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:58.002 "params": { 00:25:58.002 "name": "Nvme0", 00:25:58.002 "trtype": "tcp", 00:25:58.002 "traddr": "10.0.0.2", 00:25:58.002 "adrfam": "ipv4", 00:25:58.002 "trsvcid": "4420", 00:25:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:58.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:58.002 "hdgst": false, 00:25:58.002 "ddgst": false 00:25:58.002 }, 00:25:58.002 "method": "bdev_nvme_attach_controller" 00:25:58.002 }' 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.002 06:43:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.002 06:43:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:58.002 06:43:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:58.002 06:43:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:58.002 06:43:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:58.002 06:43:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:58.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:58.260 fio-3.35 00:25:58.260 Starting 1 thread 00:25:58.517 [2024-10-04 06:43:51.183019] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
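[Editor's note] On the initiator side the test never touches the kernel: fio loads SPDK's bdev engine via LD_PRELOAD and reads the bdev_nvme JSON printed above from a pipe (/dev/fd/62). An equivalent with ordinary files, reconstructed from the invocation and the fio banner; the job options are inferred from "rw=randread, bs=4096, iodepth=4" and the 10s run length, /tmp paths are hypothetical, and the filename is the bdev created by the JSON's bdev_nvme_attach_controller:

    # /tmp/nvme0.json would hold the '{ "params": { "name": "Nvme0", ... } }' config shown above
    cat > /tmp/dif1.fio <<'FIO'
    [filename0]
    ioengine=spdk_bdev
    thread=1
    filename=Nvme0n1
    rw=randread
    bs=4096
    iodepth=4
    time_based=1
    runtime=10
    FIO
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --spdk_json_conf=/tmp/nvme0.json /tmp/dif1.fio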
00:25:58.517 [2024-10-04 06:43:51.183107] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:10.739 00:26:10.739 filename0: (groupid=0, jobs=1): err= 0: pid=101640: Fri Oct 4 06:44:01 2024 00:26:10.739 read: IOPS=5010, BW=19.6MiB/s (20.5MB/s)(196MiB/10001msec) 00:26:10.739 slat (nsec): min=5740, max=59491, avg=6970.50, stdev=2259.65 00:26:10.739 clat (usec): min=338, max=41942, avg=777.49, stdev=3995.88 00:26:10.739 lat (usec): min=344, max=41952, avg=784.46, stdev=3995.94 00:26:10.739 clat percentiles (usec): 00:26:10.739 | 1.00th=[ 347], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:26:10.739 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:26:10.739 | 70.00th=[ 392], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 433], 00:26:10.739 | 99.00th=[ 799], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:26:10.739 | 99.99th=[41681] 00:26:10.739 bw ( KiB/s): min=12928, max=32864, per=97.31%, avg=19503.16, stdev=4989.13, samples=19 00:26:10.739 iops : min= 3232, max= 8216, avg=4875.79, stdev=1247.28, samples=19 00:26:10.739 lat (usec) : 500=98.89%, 750=0.10%, 1000=0.02% 00:26:10.739 lat (msec) : 2=0.01%, 50=0.98% 00:26:10.739 cpu : usr=89.07%, sys=9.07%, ctx=24, majf=0, minf=8 00:26:10.739 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.739 issued rwts: total=50108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.739 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.739 00:26:10.739 Run status group 0 (all jobs): 00:26:10.739 READ: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=196MiB (205MB), run=10001-10001msec 00:26:10.739 06:44:01 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:10.739 06:44:01 -- target/dif.sh@43 -- # local sub 00:26:10.739 06:44:01 -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.739 06:44:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.739 06:44:01 -- target/dif.sh@36 -- # local sub_id=0 00:26:10.739 06:44:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 00:26:10.739 real 0m11.094s 00:26:10.739 user 0m9.600s 00:26:10.739 sys 0m1.216s 00:26:10.739 06:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 ************************************ 00:26:10.739 END TEST fio_dif_1_default 00:26:10.739 ************************************ 00:26:10.739 06:44:01 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:10.739 06:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:10.739 06:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 ************************************ 00:26:10.739 START 
TEST fio_dif_1_multi_subsystems 00:26:10.739 ************************************ 00:26:10.739 06:44:01 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:26:10.739 06:44:01 -- target/dif.sh@92 -- # local files=1 00:26:10.739 06:44:01 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:10.739 06:44:01 -- target/dif.sh@28 -- # local sub 00:26:10.739 06:44:01 -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.739 06:44:01 -- target/dif.sh@31 -- # create_subsystem 0 00:26:10.739 06:44:01 -- target/dif.sh@18 -- # local sub_id=0 00:26:10.739 06:44:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 bdev_null0 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 [2024-10-04 06:44:01.667407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.739 06:44:01 -- target/dif.sh@31 -- # create_subsystem 1 00:26:10.739 06:44:01 -- target/dif.sh@18 -- # local sub_id=1 00:26:10.739 06:44:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 bdev_null1 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.739 06:44:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.739 06:44:01 -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.739 06:44:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.739 06:44:01 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:10.739 06:44:01 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:10.739 06:44:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:10.739 06:44:01 -- nvmf/common.sh@520 -- # config=() 00:26:10.739 06:44:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.739 06:44:01 -- nvmf/common.sh@520 -- # local subsystem config 00:26:10.739 06:44:01 -- target/dif.sh@82 -- # gen_fio_conf 00:26:10.739 06:44:01 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.739 06:44:01 -- target/dif.sh@54 -- # local file 00:26:10.739 06:44:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:10.739 06:44:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.739 06:44:01 -- target/dif.sh@56 -- # cat 00:26:10.739 06:44:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:10.739 06:44:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.739 { 00:26:10.739 "params": { 00:26:10.739 "name": "Nvme$subsystem", 00:26:10.739 "trtype": "$TEST_TRANSPORT", 00:26:10.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.739 "adrfam": "ipv4", 00:26:10.739 "trsvcid": "$NVMF_PORT", 00:26:10.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.739 "hdgst": ${hdgst:-false}, 00:26:10.739 "ddgst": ${ddgst:-false} 00:26:10.739 }, 00:26:10.739 "method": "bdev_nvme_attach_controller" 00:26:10.739 } 00:26:10.740 EOF 00:26:10.740 )") 00:26:10.740 06:44:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:10.740 06:44:01 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:10.740 06:44:01 -- common/autotest_common.sh@1320 -- # shift 00:26:10.740 06:44:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:10.740 06:44:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.740 06:44:01 -- nvmf/common.sh@542 -- # cat 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:10.740 06:44:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:10.740 06:44:01 -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.740 06:44:01 -- target/dif.sh@73 -- # cat 00:26:10.740 06:44:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:10.740 06:44:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:10.740 { 00:26:10.740 "params": { 00:26:10.740 "name": "Nvme$subsystem", 00:26:10.740 "trtype": "$TEST_TRANSPORT", 00:26:10.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.740 "adrfam": "ipv4", 00:26:10.740 "trsvcid": "$NVMF_PORT", 00:26:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.740 "hdgst": ${hdgst:-false}, 00:26:10.740 "ddgst": ${ddgst:-false} 00:26:10.740 }, 00:26:10.740 "method": "bdev_nvme_attach_controller" 00:26:10.740 } 00:26:10.740 EOF 00:26:10.740 )") 00:26:10.740 06:44:01 -- target/dif.sh@72 -- # (( file++ )) 00:26:10.740 06:44:01 -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:10.740 06:44:01 -- nvmf/common.sh@542 -- # cat 00:26:10.740 06:44:01 -- nvmf/common.sh@544 -- # jq . 00:26:10.740 06:44:01 -- nvmf/common.sh@545 -- # IFS=, 00:26:10.740 06:44:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:10.740 "params": { 00:26:10.740 "name": "Nvme0", 00:26:10.740 "trtype": "tcp", 00:26:10.740 "traddr": "10.0.0.2", 00:26:10.740 "adrfam": "ipv4", 00:26:10.740 "trsvcid": "4420", 00:26:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.740 "hdgst": false, 00:26:10.740 "ddgst": false 00:26:10.740 }, 00:26:10.740 "method": "bdev_nvme_attach_controller" 00:26:10.740 },{ 00:26:10.740 "params": { 00:26:10.740 "name": "Nvme1", 00:26:10.740 "trtype": "tcp", 00:26:10.740 "traddr": "10.0.0.2", 00:26:10.740 "adrfam": "ipv4", 00:26:10.740 "trsvcid": "4420", 00:26:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:10.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:10.740 "hdgst": false, 00:26:10.740 "ddgst": false 00:26:10.740 }, 00:26:10.740 "method": "bdev_nvme_attach_controller" 00:26:10.740 }' 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:10.740 06:44:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:10.740 06:44:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:10.740 06:44:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:10.740 06:44:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:10.740 06:44:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:10.740 06:44:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.740 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:10.740 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:10.740 fio-3.35 00:26:10.740 Starting 2 threads 00:26:10.740 [2024-10-04 06:44:02.464885] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
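[Editor's note] The multi-subsystems variant simply repeats the single-bdev setup per subsystem. Flattening the rpc_cmd trace above gives this sequence (commands and arguments verbatim from the log; rpc.py is what rpc_cmd invokes):

    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for i in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done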
00:26:10.740 [2024-10-04 06:44:02.464969] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:20.710 00:26:20.710 filename0: (groupid=0, jobs=1): err= 0: pid=101800: Fri Oct 4 06:44:12 2024 00:26:20.710 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.80MiB/10015msec) 00:26:20.710 slat (nsec): min=5949, max=84887, avg=9804.00, stdev=6662.97 00:26:20.710 clat (usec): min=361, max=41921, avg=15940.92, stdev=19688.61 00:26:20.710 lat (usec): min=367, max=41943, avg=15950.72, stdev=19688.84 00:26:20.710 clat percentiles (usec): 00:26:20.710 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 412], 00:26:20.710 | 30.00th=[ 424], 40.00th=[ 441], 50.00th=[ 461], 60.00th=[ 570], 00:26:20.710 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:20.710 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:20.710 | 99.99th=[41681] 00:26:20.710 bw ( KiB/s): min= 640, max= 1504, per=53.91%, avg=1001.65, stdev=222.20, samples=20 00:26:20.710 iops : min= 160, max= 376, avg=250.40, stdev=55.55, samples=20 00:26:20.710 lat (usec) : 500=57.78%, 750=3.39%, 1000=0.40% 00:26:20.710 lat (msec) : 2=0.16%, 50=38.28% 00:26:20.710 cpu : usr=97.34%, sys=2.14%, ctx=55, majf=0, minf=0 00:26:20.710 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.710 issued rwts: total=2508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.710 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:20.710 filename1: (groupid=0, jobs=1): err= 0: pid=101801: Fri Oct 4 06:44:12 2024 00:26:20.710 read: IOPS=214, BW=857KiB/s (878kB/s)(8608KiB/10039msec) 00:26:20.710 slat (nsec): min=5800, max=73172, avg=9210.94, stdev=6109.56 00:26:20.710 clat (usec): min=355, max=41634, avg=18630.20, stdev=20140.06 00:26:20.710 lat (usec): min=361, max=41677, avg=18639.41, stdev=20139.94 00:26:20.710 clat percentiles (usec): 00:26:20.710 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 396], 00:26:20.710 | 30.00th=[ 412], 40.00th=[ 433], 50.00th=[ 474], 60.00th=[40633], 00:26:20.710 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:20.710 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:20.710 | 99.99th=[41681] 00:26:20.710 bw ( KiB/s): min= 576, max= 1152, per=46.26%, avg=859.20, stdev=155.83, samples=20 00:26:20.710 iops : min= 144, max= 288, avg=214.80, stdev=38.96, samples=20 00:26:20.710 lat (usec) : 500=52.00%, 750=2.18%, 1000=0.65% 00:26:20.710 lat (msec) : 2=0.19%, 50=44.98% 00:26:20.710 cpu : usr=97.77%, sys=1.84%, ctx=8, majf=0, minf=9 00:26:20.710 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.710 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.710 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:20.710 00:26:20.710 Run status group 0 (all jobs): 00:26:20.710 READ: bw=1857KiB/s (1901kB/s), 857KiB/s-1002KiB/s (878kB/s-1026kB/s), io=18.2MiB (19.1MB), run=10015-10039msec 00:26:20.710 06:44:12 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:20.710 06:44:12 -- target/dif.sh@43 -- # local sub 00:26:20.710 06:44:12 -- target/dif.sh@45 -- # for sub in 
"$@" 00:26:20.710 06:44:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:20.710 06:44:12 -- target/dif.sh@36 -- # local sub_id=0 00:26:20.710 06:44:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:20.710 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.710 06:44:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:20.710 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.710 06:44:12 -- target/dif.sh@45 -- # for sub in "$@" 00:26:20.710 06:44:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:20.710 06:44:12 -- target/dif.sh@36 -- # local sub_id=1 00:26:20.710 06:44:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.710 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.710 06:44:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:20.710 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.710 00:26:20.710 real 0m11.243s 00:26:20.710 user 0m20.439s 00:26:20.710 sys 0m0.690s 00:26:20.710 06:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.710 ************************************ 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 END TEST fio_dif_1_multi_subsystems 00:26:20.710 ************************************ 00:26:20.710 06:44:12 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:20.710 06:44:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:20.710 06:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:20.710 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.710 ************************************ 00:26:20.710 START TEST fio_dif_rand_params 00:26:20.710 ************************************ 00:26:20.710 06:44:12 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:26:20.710 06:44:12 -- target/dif.sh@100 -- # local NULL_DIF 00:26:20.710 06:44:12 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:20.710 06:44:12 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:20.710 06:44:12 -- target/dif.sh@103 -- # bs=128k 00:26:20.710 06:44:12 -- target/dif.sh@103 -- # numjobs=3 00:26:20.710 06:44:12 -- target/dif.sh@103 -- # iodepth=3 00:26:20.710 06:44:12 -- target/dif.sh@103 -- # runtime=5 00:26:20.710 06:44:12 -- target/dif.sh@105 -- # create_subsystems 0 00:26:20.710 06:44:12 -- target/dif.sh@28 -- # local sub 00:26:20.710 06:44:12 -- target/dif.sh@30 -- # for sub in "$@" 00:26:20.710 06:44:12 -- target/dif.sh@31 -- # create_subsystem 0 00:26:20.710 06:44:12 -- target/dif.sh@18 -- # local sub_id=0 00:26:20.711 06:44:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:20.711 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.711 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.711 bdev_null0 00:26:20.711 06:44:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.711 06:44:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:20.711 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.711 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.711 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.711 06:44:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:20.711 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.711 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.711 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.711 06:44:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:20.711 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:20.711 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.711 [2024-10-04 06:44:12.972358] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.711 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.711 06:44:12 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:20.711 06:44:12 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:20.711 06:44:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:20.711 06:44:12 -- nvmf/common.sh@520 -- # config=() 00:26:20.711 06:44:12 -- nvmf/common.sh@520 -- # local subsystem config 00:26:20.711 06:44:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:20.711 06:44:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.711 06:44:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:20.711 { 00:26:20.711 "params": { 00:26:20.711 "name": "Nvme$subsystem", 00:26:20.711 "trtype": "$TEST_TRANSPORT", 00:26:20.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:20.711 "adrfam": "ipv4", 00:26:20.711 "trsvcid": "$NVMF_PORT", 00:26:20.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:20.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:20.711 "hdgst": ${hdgst:-false}, 00:26:20.711 "ddgst": ${ddgst:-false} 00:26:20.711 }, 00:26:20.711 "method": "bdev_nvme_attach_controller" 00:26:20.711 } 00:26:20.711 EOF 00:26:20.711 )") 00:26:20.711 06:44:12 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.711 06:44:12 -- target/dif.sh@82 -- # gen_fio_conf 00:26:20.711 06:44:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:20.711 06:44:12 -- target/dif.sh@54 -- # local file 00:26:20.711 06:44:12 -- target/dif.sh@56 -- # cat 00:26:20.711 06:44:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:20.711 06:44:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:20.711 06:44:12 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:20.711 06:44:12 -- common/autotest_common.sh@1320 -- # shift 00:26:20.711 06:44:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:20.711 06:44:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.711 06:44:12 -- nvmf/common.sh@542 -- # cat 00:26:20.711 06:44:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:20.711 06:44:12 
-- common/autotest_common.sh@1324 -- # grep libasan 00:26:20.711 06:44:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:20.711 06:44:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:20.711 06:44:12 -- target/dif.sh@72 -- # (( file <= files )) 00:26:20.711 06:44:12 -- nvmf/common.sh@544 -- # jq . 00:26:20.711 06:44:12 -- nvmf/common.sh@545 -- # IFS=, 00:26:20.711 06:44:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:20.711 "params": { 00:26:20.711 "name": "Nvme0", 00:26:20.711 "trtype": "tcp", 00:26:20.711 "traddr": "10.0.0.2", 00:26:20.711 "adrfam": "ipv4", 00:26:20.711 "trsvcid": "4420", 00:26:20.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:20.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:20.711 "hdgst": false, 00:26:20.711 "ddgst": false 00:26:20.711 }, 00:26:20.711 "method": "bdev_nvme_attach_controller" 00:26:20.711 }' 00:26:20.711 06:44:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:20.711 06:44:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:20.711 06:44:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:20.711 06:44:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:20.711 06:44:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:20.711 06:44:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:20.711 06:44:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:20.711 06:44:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:20.711 06:44:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:20.711 06:44:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:20.711 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:20.711 ... 00:26:20.711 fio-3.35 00:26:20.711 Starting 3 threads 00:26:20.969 [2024-10-04 06:44:13.630216] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
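Stripped of the rpc_cmd and xtrace wrapping, the create_subsystem 0 call traced above comes down to four RPCs against the running target. A condensed sketch as direct rpc.py invocations, assuming the harness's rpc_cmd resolves to scripts/rpc.py in the repo checkout and the target sits on the default /var/tmp/spdk.sock:

# Sketch of target/dif.sh create_subsystem 0, per the trace:
# a 64 MiB null bdev with 512-byte blocks, 16 bytes of per-block
# metadata and DIF type 3, exported as TCP subsystem cnode0.
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The NULL_DIF=3 parameter set at the top of this test is what selects --dif-type 3; the later parameter sets repeat the same sequence with DIF type 2 and type 1 bdevs.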
00:26:20.969 [2024-10-04 06:44:13.630304] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:26.282 00:26:26.282 filename0: (groupid=0, jobs=1): err= 0: pid=101957: Fri Oct 4 06:44:18 2024 00:26:26.282 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5007msec) 00:26:26.282 slat (nsec): min=6281, max=68747, avg=14488.49, stdev=6909.41 00:26:26.282 clat (usec): min=3878, max=52075, avg=11852.29, stdev=10679.25 00:26:26.282 lat (usec): min=3885, max=52087, avg=11866.78, stdev=10678.95 00:26:26.282 clat percentiles (usec): 00:26:26.282 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7767], 00:26:26.282 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:26:26.282 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11076], 95.00th=[49021], 00:26:26.282 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[52167], 00:26:26.282 | 99.99th=[52167] 00:26:26.282 bw ( KiB/s): min=27136, max=38912, per=31.21%, avg=31189.11, stdev=4117.87, samples=9 00:26:26.282 iops : min= 212, max= 304, avg=243.56, stdev=32.14, samples=9 00:26:26.282 lat (msec) : 4=0.47%, 10=72.57%, 20=19.60%, 50=4.74%, 100=2.61% 00:26:26.282 cpu : usr=95.05%, sys=3.56%, ctx=6, majf=0, minf=0 00:26:26.282 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:26.282 filename0: (groupid=0, jobs=1): err= 0: pid=101958: Fri Oct 4 06:44:18 2024 00:26:26.282 read: IOPS=297, BW=37.1MiB/s (39.0MB/s)(186MiB/5007msec) 00:26:26.282 slat (nsec): min=5578, max=58168, avg=10987.00, stdev=6792.43 00:26:26.282 clat (usec): min=3456, max=50049, avg=10065.83, stdev=5118.51 00:26:26.282 lat (usec): min=3463, max=50055, avg=10076.82, stdev=5118.97 00:26:26.282 clat percentiles (usec): 00:26:26.282 | 1.00th=[ 3654], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 7308], 00:26:26.282 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[10421], 60.00th=[11994], 00:26:26.282 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13435], 95.00th=[14353], 00:26:26.282 | 99.00th=[44827], 99.50th=[46924], 99.90th=[49546], 99.95th=[50070], 00:26:26.282 | 99.99th=[50070] 00:26:26.282 bw ( KiB/s): min=27648, max=49920, per=38.03%, avg=38008.40, stdev=7960.85, samples=10 00:26:26.282 iops : min= 216, max= 390, avg=296.80, stdev=62.04, samples=10 00:26:26.282 lat (msec) : 4=11.63%, 10=37.57%, 20=49.60%, 50=1.14%, 100=0.07% 00:26:26.282 cpu : usr=93.75%, sys=4.43%, ctx=12, majf=0, minf=11 00:26:26.282 IO depths : 1=30.7%, 2=69.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:26.282 filename0: (groupid=0, jobs=1): err= 0: pid=101959: Fri Oct 4 06:44:18 2024 00:26:26.282 read: IOPS=234, BW=29.3MiB/s (30.8MB/s)(148MiB/5041msec) 00:26:26.282 slat (nsec): min=6195, max=79630, avg=17346.89, stdev=7971.12 00:26:26.282 clat (usec): min=3847, max=55391, avg=12758.63, stdev=10085.90 00:26:26.282 lat (usec): min=3854, max=55412, avg=12775.97, stdev=10086.42 00:26:26.282 clat 
percentiles (usec): 00:26:26.282 | 1.00th=[ 4228], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7439], 00:26:26.282 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:26:26.282 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13698], 95.00th=[48497], 00:26:26.282 | 99.00th=[52691], 99.50th=[52691], 99.90th=[54789], 99.95th=[55313], 00:26:26.282 | 99.99th=[55313] 00:26:26.282 bw ( KiB/s): min=18468, max=40448, per=30.23%, avg=30211.60, stdev=6151.75, samples=10 00:26:26.282 iops : min= 144, max= 316, avg=236.00, stdev=48.12, samples=10 00:26:26.282 lat (msec) : 4=0.34%, 10=33.56%, 20=59.76%, 50=2.54%, 100=3.80% 00:26:26.282 cpu : usr=93.89%, sys=4.37%, ctx=9, majf=0, minf=9 00:26:26.282 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.282 issued rwts: total=1183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.282 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:26.282 00:26:26.282 Run status group 0 (all jobs): 00:26:26.282 READ: bw=97.6MiB/s (102MB/s), 29.3MiB/s-37.1MiB/s (30.8MB/s-39.0MB/s), io=492MiB (516MB), run=5007-5041msec 00:26:26.540 06:44:18 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:26.540 06:44:18 -- target/dif.sh@43 -- # local sub 00:26:26.540 06:44:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:26.540 06:44:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:26.540 06:44:18 -- target/dif.sh@36 -- # local sub_id=0 00:26:26.540 06:44:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.540 06:44:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:18 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # bs=4k 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # numjobs=8 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # iodepth=16 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # runtime= 00:26:26.540 06:44:19 -- target/dif.sh@109 -- # files=2 00:26:26.540 06:44:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:26.540 06:44:19 -- target/dif.sh@28 -- # local sub 00:26:26.540 06:44:19 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.540 06:44:19 -- target/dif.sh@31 -- # create_subsystem 0 00:26:26.540 06:44:19 -- target/dif.sh@18 -- # local sub_id=0 00:26:26.540 06:44:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 bdev_null0 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 06:44:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 [2024-10-04 06:44:19.042375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.540 06:44:19 -- target/dif.sh@31 -- # create_subsystem 1 00:26:26.540 06:44:19 -- target/dif.sh@18 -- # local sub_id=1 00:26:26.540 06:44:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 bdev_null1 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.540 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.540 06:44:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:26.540 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.540 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.541 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.541 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.541 06:44:19 -- target/dif.sh@31 -- # create_subsystem 2 00:26:26.541 06:44:19 -- target/dif.sh@18 -- # local sub_id=2 00:26:26.541 06:44:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:26.541 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.541 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 bdev_null2 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:26.541 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.541 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:26.541 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 
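The gen_nvmf_target_json step, traced once above for a single subsystem and again just below for three, is the part worth isolating: it accumulates one attach-controller JSON fragment per subsystem in a bash array (the config+= heredoc lines), comma-joins the fragments via IFS=, with printf, and pretty-prints the result through jq, matching the nvmf/common.sh@542/@544/@545/@546 lines in the trace. A distilled sketch of the idiom follows; the outer subsystems/config wrapper is inferred from the printed output rather than copied verbatim from nvmf/common.sh, and the function name is illustrative:

# Distilled sketch of the config-accumulation idiom seen in the trace.
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per subsystem, as in the trace.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments (IFS is changed only inside the subshell)
    # and pretty-print the assembled document with jq.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=,; printf '%s' "${config[*]}") ]
    }
  ]
}
JSON
}
# Usage: gen_target_json_sketch 0 1 2 > /tmp/bdev.json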
00:26:26.541 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:26.541 06:44:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:26.541 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.541 06:44:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:26.541 06:44:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:26.541 06:44:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:26.541 06:44:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:26.541 06:44:19 -- nvmf/common.sh@520 -- # config=() 00:26:26.541 06:44:19 -- nvmf/common.sh@520 -- # local subsystem config 00:26:26.541 06:44:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.541 { 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme$subsystem", 00:26:26.541 "trtype": "$TEST_TRANSPORT", 00:26:26.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "$NVMF_PORT", 00:26:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.541 "hdgst": ${hdgst:-false}, 00:26:26.541 "ddgst": ${ddgst:-false} 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 } 00:26:26.541 EOF 00:26:26.541 )") 00:26:26.541 06:44:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.541 06:44:19 -- target/dif.sh@82 -- # gen_fio_conf 00:26:26.541 06:44:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.541 06:44:19 -- target/dif.sh@54 -- # local file 00:26:26.541 06:44:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:26.541 06:44:19 -- target/dif.sh@56 -- # cat 00:26:26.541 06:44:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.541 06:44:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # cat 00:26:26.541 06:44:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.541 06:44:19 -- common/autotest_common.sh@1320 -- # shift 00:26:26.541 06:44:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:26.541 06:44:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.541 06:44:19 -- target/dif.sh@73 -- # cat 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.541 06:44:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.541 { 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme$subsystem", 00:26:26.541 "trtype": "$TEST_TRANSPORT", 00:26:26.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "$NVMF_PORT", 00:26:26.541 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.541 "hdgst": ${hdgst:-false}, 00:26:26.541 "ddgst": ${ddgst:-false} 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 } 00:26:26.541 EOF 00:26:26.541 )") 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # cat 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file++ )) 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.541 06:44:19 -- target/dif.sh@73 -- # cat 00:26:26.541 06:44:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.541 { 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme$subsystem", 00:26:26.541 "trtype": "$TEST_TRANSPORT", 00:26:26.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "$NVMF_PORT", 00:26:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.541 "hdgst": ${hdgst:-false}, 00:26:26.541 "ddgst": ${ddgst:-false} 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 } 00:26:26.541 EOF 00:26:26.541 )") 00:26:26.541 06:44:19 -- nvmf/common.sh@542 -- # cat 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file++ )) 00:26:26.541 06:44:19 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.541 06:44:19 -- nvmf/common.sh@544 -- # jq . 00:26:26.541 06:44:19 -- nvmf/common.sh@545 -- # IFS=, 00:26:26.541 06:44:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme0", 00:26:26.541 "trtype": "tcp", 00:26:26.541 "traddr": "10.0.0.2", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "4420", 00:26:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:26.541 "hdgst": false, 00:26:26.541 "ddgst": false 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 },{ 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme1", 00:26:26.541 "trtype": "tcp", 00:26:26.541 "traddr": "10.0.0.2", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "4420", 00:26:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.541 "hdgst": false, 00:26:26.541 "ddgst": false 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 },{ 00:26:26.541 "params": { 00:26:26.541 "name": "Nvme2", 00:26:26.541 "trtype": "tcp", 00:26:26.541 "traddr": "10.0.0.2", 00:26:26.541 "adrfam": "ipv4", 00:26:26.541 "trsvcid": "4420", 00:26:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:26.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:26.541 "hdgst": false, 00:26:26.541 "ddgst": false 00:26:26.541 }, 00:26:26.541 "method": "bdev_nvme_attach_controller" 00:26:26.541 }' 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.541 06:44:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.541 06:44:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:26.541 06:44:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:26.541 06:44:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:26.541 
06:44:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:26.541 06:44:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.799 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.799 ... 00:26:26.799 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.799 ... 00:26:26.799 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:26.799 ... 00:26:26.799 fio-3.35 00:26:26.799 Starting 24 threads 00:26:27.367 [2024-10-04 06:44:19.973348] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:27.367 [2024-10-04 06:44:19.973432] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:39.559 00:26:39.559 filename0: (groupid=0, jobs=1): err= 0: pid=102054: Fri Oct 4 06:44:30 2024 00:26:39.559 read: IOPS=271, BW=1084KiB/s (1110kB/s)(10.6MiB/10037msec) 00:26:39.559 slat (usec): min=4, max=4025, avg=16.26, stdev=99.41 00:26:39.559 clat (msec): min=16, max=135, avg=58.87, stdev=18.25 00:26:39.559 lat (msec): min=16, max=136, avg=58.88, stdev=18.25 00:26:39.559 clat percentiles (msec): 00:26:39.559 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 44], 00:26:39.559 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:26:39.559 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 83], 95.00th=[ 91], 00:26:39.559 | 99.00th=[ 107], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 136], 00:26:39.559 | 99.99th=[ 136] 00:26:39.559 bw ( KiB/s): min= 768, max= 1328, per=4.42%, avg=1082.10, stdev=159.74, samples=20 00:26:39.559 iops : min= 192, max= 332, avg=270.50, stdev=39.92, samples=20 00:26:39.559 lat (msec) : 20=0.59%, 50=36.02%, 100=61.63%, 250=1.76% 00:26:39.559 cpu : usr=42.99%, sys=0.62%, ctx=1111, majf=0, minf=9 00:26:39.559 IO depths : 1=1.3%, 2=2.8%, 4=10.4%, 8=73.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:39.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 issued rwts: total=2721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.559 filename0: (groupid=0, jobs=1): err= 0: pid=102055: Fri Oct 4 06:44:30 2024 00:26:39.559 read: IOPS=239, BW=959KiB/s (982kB/s)(9604KiB/10016msec) 00:26:39.559 slat (usec): min=4, max=8033, avg=26.49, stdev=301.62 00:26:39.559 clat (msec): min=24, max=137, avg=66.55, stdev=17.15 00:26:39.559 lat (msec): min=24, max=137, avg=66.57, stdev=17.14 00:26:39.559 clat percentiles (msec): 00:26:39.559 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 56], 00:26:39.559 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:26:39.559 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 96], 00:26:39.559 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 138], 99.95th=[ 138], 00:26:39.559 | 99.99th=[ 138] 00:26:39.559 bw ( KiB/s): min= 640, max= 1208, per=3.90%, avg=956.74, stdev=133.19, samples=19 00:26:39.559 iops : min= 160, max= 302, avg=239.16, stdev=33.30, samples=19 00:26:39.559 lat (msec) : 50=16.62%, 100=80.42%, 250=2.96% 00:26:39.559 cpu : usr=32.96%, sys=0.40%, ctx=921, majf=0, minf=9 00:26:39.559 IO depths : 1=1.6%, 2=3.8%, 4=12.7%, 
8=70.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:39.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.559 filename0: (groupid=0, jobs=1): err= 0: pid=102056: Fri Oct 4 06:44:30 2024 00:26:39.559 read: IOPS=292, BW=1169KiB/s (1197kB/s)(11.5MiB/10043msec) 00:26:39.559 slat (usec): min=4, max=8042, avg=21.09, stdev=240.44 00:26:39.559 clat (msec): min=5, max=133, avg=54.60, stdev=18.03 00:26:39.559 lat (msec): min=5, max=133, avg=54.62, stdev=18.04 00:26:39.559 clat percentiles (msec): 00:26:39.559 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:26:39.559 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 58], 00:26:39.559 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 86], 00:26:39.559 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 134], 99.95th=[ 134], 00:26:39.559 | 99.99th=[ 134] 00:26:39.559 bw ( KiB/s): min= 768, max= 1584, per=4.77%, avg=1168.75, stdev=182.81, samples=20 00:26:39.559 iops : min= 192, max= 396, avg=292.15, stdev=45.71, samples=20 00:26:39.559 lat (msec) : 10=1.09%, 20=0.55%, 50=43.69%, 100=52.97%, 250=1.70% 00:26:39.559 cpu : usr=38.85%, sys=0.58%, ctx=1052, majf=0, minf=9 00:26:39.559 IO depths : 1=0.7%, 2=1.5%, 4=8.1%, 8=76.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:39.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.559 issued rwts: total=2934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.559 filename0: (groupid=0, jobs=1): err= 0: pid=102057: Fri Oct 4 06:44:30 2024 00:26:39.559 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10027msec) 00:26:39.559 slat (usec): min=3, max=8053, avg=18.75, stdev=189.02 00:26:39.559 clat (msec): min=18, max=138, avg=58.80, stdev=17.82 00:26:39.559 lat (msec): min=18, max=138, avg=58.82, stdev=17.82 00:26:39.559 clat percentiles (msec): 00:26:39.559 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 43], 00:26:39.559 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:26:39.560 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 90], 00:26:39.560 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 138], 99.95th=[ 138], 00:26:39.560 | 99.99th=[ 138] 00:26:39.560 bw ( KiB/s): min= 784, max= 1408, per=4.43%, avg=1085.20, stdev=165.31, samples=20 00:26:39.560 iops : min= 196, max= 352, avg=271.30, stdev=41.33, samples=20 00:26:39.560 lat (msec) : 20=0.59%, 50=35.09%, 100=62.41%, 250=1.91% 00:26:39.560 cpu : usr=43.79%, sys=0.73%, ctx=1155, majf=0, minf=9 00:26:39.560 IO depths : 1=1.7%, 2=3.6%, 4=11.6%, 8=71.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=102058: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=246, BW=985KiB/s (1008kB/s)(9860KiB/10014msec) 00:26:39.560 slat (usec): min=5, max=8022, avg=24.11, stdev=262.65 00:26:39.560 clat (msec): min=26, max=132, avg=64.82, stdev=18.12 00:26:39.560 lat 
(msec): min=26, max=132, avg=64.85, stdev=18.13 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 50], 00:26:39.560 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 00:26:39.560 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 97], 00:26:39.560 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:26:39.560 | 99.99th=[ 132] 00:26:39.560 bw ( KiB/s): min= 688, max= 1176, per=4.01%, avg=983.53, stdev=121.96, samples=19 00:26:39.560 iops : min= 172, max= 294, avg=245.84, stdev=30.49, samples=19 00:26:39.560 lat (msec) : 50=21.70%, 100=73.83%, 250=4.46% 00:26:39.560 cpu : usr=36.73%, sys=0.62%, ctx=1006, majf=0, minf=9 00:26:39.560 IO depths : 1=1.3%, 2=3.0%, 4=10.5%, 8=72.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=102059: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=244, BW=978KiB/s (1001kB/s)(9792KiB/10016msec) 00:26:39.560 slat (usec): min=4, max=8055, avg=17.85, stdev=174.05 00:26:39.560 clat (msec): min=25, max=157, avg=65.31, stdev=19.26 00:26:39.560 lat (msec): min=25, max=157, avg=65.33, stdev=19.25 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 49], 00:26:39.560 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 68], 00:26:39.560 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 96], 00:26:39.560 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 159], 99.95th=[ 159], 00:26:39.560 | 99.99th=[ 159] 00:26:39.560 bw ( KiB/s): min= 640, max= 1376, per=3.96%, avg=969.26, stdev=153.52, samples=19 00:26:39.560 iops : min= 160, max= 344, avg=242.32, stdev=38.38, samples=19 00:26:39.560 lat (msec) : 50=24.31%, 100=71.41%, 250=4.29% 00:26:39.560 cpu : usr=34.38%, sys=0.53%, ctx=942, majf=0, minf=9 00:26:39.560 IO depths : 1=1.5%, 2=3.8%, 4=12.1%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=102060: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=235, BW=943KiB/s (965kB/s)(9432KiB/10005msec) 00:26:39.560 slat (usec): min=4, max=8004, avg=20.39, stdev=201.86 00:26:39.560 clat (msec): min=32, max=135, avg=67.73, stdev=17.08 00:26:39.560 lat (msec): min=32, max=135, avg=67.75, stdev=17.08 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:26:39.560 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:26:39.560 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 99], 00:26:39.560 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 136], 99.95th=[ 136], 00:26:39.560 | 99.99th=[ 136] 00:26:39.560 bw ( KiB/s): min= 640, max= 1152, per=3.83%, avg=938.95, stdev=114.30, samples=19 00:26:39.560 iops : min= 160, max= 288, avg=234.74, stdev=28.58, samples=19 00:26:39.560 lat (msec) : 50=13.36%, 100=82.70%, 250=3.94% 00:26:39.560 cpu : usr=39.99%, sys=0.53%, 
ctx=1122, majf=0, minf=9 00:26:39.560 IO depths : 1=2.8%, 2=6.4%, 4=17.0%, 8=63.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=92.0%, 8=3.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename0: (groupid=0, jobs=1): err= 0: pid=102061: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=269, BW=1079KiB/s (1105kB/s)(10.6MiB/10022msec) 00:26:39.560 slat (usec): min=6, max=8038, avg=22.24, stdev=231.97 00:26:39.560 clat (msec): min=24, max=131, avg=59.08, stdev=17.62 00:26:39.560 lat (msec): min=24, max=131, avg=59.10, stdev=17.63 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 45], 00:26:39.560 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 60], 60.00th=[ 61], 00:26:39.560 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:26:39.560 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 132], 99.95th=[ 132], 00:26:39.560 | 99.99th=[ 132] 00:26:39.560 bw ( KiB/s): min= 768, max= 1424, per=4.39%, avg=1075.20, stdev=169.48, samples=20 00:26:39.560 iops : min= 192, max= 356, avg=268.80, stdev=42.37, samples=20 00:26:39.560 lat (msec) : 50=34.17%, 100=63.72%, 250=2.11% 00:26:39.560 cpu : usr=36.07%, sys=0.48%, ctx=985, majf=0, minf=9 00:26:39.560 IO depths : 1=1.3%, 2=3.0%, 4=10.2%, 8=73.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102062: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=249, BW=996KiB/s (1020kB/s)(9992KiB/10030msec) 00:26:39.560 slat (usec): min=5, max=4421, avg=19.73, stdev=156.69 00:26:39.560 clat (msec): min=25, max=149, avg=64.05, stdev=17.59 00:26:39.560 lat (msec): min=25, max=149, avg=64.07, stdev=17.59 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 50], 00:26:39.560 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 66], 00:26:39.560 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 96], 00:26:39.560 | 99.00th=[ 115], 99.50th=[ 124], 99.90th=[ 150], 99.95th=[ 150], 00:26:39.560 | 99.99th=[ 150] 00:26:39.560 bw ( KiB/s): min= 763, max= 1280, per=4.06%, avg=994.10, stdev=137.90, samples=20 00:26:39.560 iops : min= 190, max= 320, avg=248.45, stdev=34.57, samples=20 00:26:39.560 lat (msec) : 50=21.22%, 100=75.10%, 250=3.68% 00:26:39.560 cpu : usr=42.72%, sys=0.74%, ctx=1267, majf=0, minf=9 00:26:39.560 IO depths : 1=2.0%, 2=4.5%, 4=13.5%, 8=68.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102063: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=241, BW=966KiB/s (990kB/s)(9668KiB/10004msec) 00:26:39.560 slat (usec): min=3, max=8022, avg=18.83, stdev=193.18 00:26:39.560 clat (msec): min=6, 
max=153, avg=66.11, stdev=19.60 00:26:39.560 lat (msec): min=6, max=153, avg=66.13, stdev=19.60 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 50], 00:26:39.560 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:26:39.560 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 99], 00:26:39.560 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:26:39.560 | 99.99th=[ 155] 00:26:39.560 bw ( KiB/s): min= 512, max= 1376, per=3.92%, avg=961.32, stdev=182.82, samples=19 00:26:39.560 iops : min= 128, max= 344, avg=240.32, stdev=45.72, samples=19 00:26:39.560 lat (msec) : 10=0.41%, 50=20.36%, 100=74.89%, 250=4.34% 00:26:39.560 cpu : usr=34.07%, sys=0.49%, ctx=927, majf=0, minf=9 00:26:39.560 IO depths : 1=1.0%, 2=2.4%, 4=9.8%, 8=73.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.5%, 8=5.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102064: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=282, BW=1128KiB/s (1156kB/s)(11.1MiB/10049msec) 00:26:39.560 slat (usec): min=3, max=8031, avg=18.10, stdev=213.00 00:26:39.560 clat (msec): min=2, max=131, avg=56.50, stdev=21.87 00:26:39.560 lat (msec): min=2, max=131, avg=56.52, stdev=21.87 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 3], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 40], 00:26:39.560 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 60], 00:26:39.560 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 96], 00:26:39.560 | 99.00th=[ 120], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 132], 00:26:39.560 | 99.99th=[ 132] 00:26:39.560 bw ( KiB/s): min= 720, max= 1883, per=4.60%, avg=1127.85, stdev=270.34, samples=20 00:26:39.560 iops : min= 180, max= 470, avg=281.80, stdev=67.48, samples=20 00:26:39.560 lat (msec) : 4=1.69%, 10=0.56%, 20=1.13%, 50=40.78%, 100=52.06% 00:26:39.560 lat (msec) : 250=3.77% 00:26:39.560 cpu : usr=36.20%, sys=0.47%, ctx=975, majf=0, minf=9 00:26:39.560 IO depths : 1=0.8%, 2=1.7%, 4=8.6%, 8=76.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102065: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10017msec) 00:26:39.560 slat (usec): min=4, max=8019, avg=22.05, stdev=223.64 00:26:39.560 clat (msec): min=23, max=126, avg=60.60, stdev=16.08 00:26:39.560 lat (msec): min=23, max=126, avg=60.62, stdev=16.09 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:26:39.560 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 61], 00:26:39.560 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 92], 00:26:39.560 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:26:39.560 | 99.99th=[ 127] 00:26:39.560 bw ( KiB/s): min= 720, max= 1288, per=4.28%, avg=1049.20, stdev=138.28, samples=20 00:26:39.560 iops : min= 180, max= 322, avg=262.30, stdev=34.57, 
samples=20 00:26:39.560 lat (msec) : 50=23.46%, 100=74.76%, 250=1.78% 00:26:39.560 cpu : usr=37.07%, sys=0.58%, ctx=1106, majf=0, minf=9 00:26:39.560 IO depths : 1=1.6%, 2=3.4%, 4=11.2%, 8=72.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102066: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=237, BW=950KiB/s (973kB/s)(9512KiB/10014msec) 00:26:39.560 slat (usec): min=4, max=3029, avg=15.02, stdev=62.55 00:26:39.560 clat (msec): min=23, max=165, avg=67.25, stdev=18.29 00:26:39.560 lat (msec): min=23, max=165, avg=67.27, stdev=18.29 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53], 00:26:39.560 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:26:39.560 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 96], 00:26:39.560 | 99.00th=[ 127], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:26:39.560 | 99.99th=[ 165] 00:26:39.560 bw ( KiB/s): min= 640, max= 1200, per=3.84%, avg=940.63, stdev=147.57, samples=19 00:26:39.560 iops : min= 160, max= 300, avg=235.16, stdev=36.89, samples=19 00:26:39.560 lat (msec) : 50=15.43%, 100=79.94%, 250=4.63% 00:26:39.560 cpu : usr=37.28%, sys=0.52%, ctx=1119, majf=0, minf=9 00:26:39.560 IO depths : 1=2.0%, 2=4.6%, 4=14.0%, 8=68.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:39.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.560 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.560 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.560 filename1: (groupid=0, jobs=1): err= 0: pid=102067: Fri Oct 4 06:44:30 2024 00:26:39.560 read: IOPS=283, BW=1136KiB/s (1163kB/s)(11.1MiB/10030msec) 00:26:39.560 slat (usec): min=5, max=8043, avg=21.20, stdev=260.72 00:26:39.560 clat (msec): min=10, max=126, avg=56.19, stdev=16.65 00:26:39.560 lat (msec): min=10, max=126, avg=56.21, stdev=16.65 00:26:39.560 clat percentiles (msec): 00:26:39.560 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:26:39.560 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:26:39.560 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 77], 95.00th=[ 85], 00:26:39.560 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 128], 99.95th=[ 128], 00:26:39.560 | 99.99th=[ 128] 00:26:39.560 bw ( KiB/s): min= 896, max= 1376, per=4.63%, avg=1134.60, stdev=154.97, samples=20 00:26:39.560 iops : min= 224, max= 344, avg=283.60, stdev=38.68, samples=20 00:26:39.561 lat (msec) : 20=1.05%, 50=39.64%, 100=57.51%, 250=1.79% 00:26:39.561 cpu : usr=39.73%, sys=0.65%, ctx=1199, majf=0, minf=9 00:26:39.561 IO depths : 1=0.5%, 2=1.2%, 4=7.6%, 8=77.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=89.2%, 8=6.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename1: (groupid=0, jobs=1): err= 0: pid=102068: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=238, BW=955KiB/s 
(978kB/s)(9568KiB/10017msec) 00:26:39.561 slat (usec): min=4, max=8025, avg=18.37, stdev=183.29 00:26:39.561 clat (msec): min=22, max=140, avg=66.89, stdev=19.12 00:26:39.561 lat (msec): min=22, max=140, avg=66.91, stdev=19.11 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 51], 00:26:39.561 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:26:39.561 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 104], 00:26:39.561 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:26:39.561 | 99.99th=[ 142] 00:26:39.561 bw ( KiB/s): min= 640, max= 1168, per=3.86%, avg=946.58, stdev=144.18, samples=19 00:26:39.561 iops : min= 160, max= 292, avg=236.63, stdev=36.04, samples=19 00:26:39.561 lat (msec) : 50=18.94%, 100=74.83%, 250=6.23% 00:26:39.561 cpu : usr=36.88%, sys=0.46%, ctx=1076, majf=0, minf=9 00:26:39.561 IO depths : 1=1.1%, 2=3.0%, 4=10.3%, 8=72.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.6%, 8=5.6%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename1: (groupid=0, jobs=1): err= 0: pid=102069: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=246, BW=986KiB/s (1010kB/s)(9864KiB/10005msec) 00:26:39.561 slat (usec): min=4, max=8046, avg=21.92, stdev=242.46 00:26:39.561 clat (msec): min=6, max=122, avg=64.78, stdev=17.05 00:26:39.561 lat (msec): min=6, max=122, avg=64.81, stdev=17.04 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 54], 00:26:39.561 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 66], 00:26:39.561 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 89], 95.00th=[ 96], 00:26:39.561 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 124], 00:26:39.561 | 99.99th=[ 124] 00:26:39.561 bw ( KiB/s): min= 640, max= 1152, per=3.99%, avg=977.74, stdev=134.75, samples=19 00:26:39.561 iops : min= 160, max= 288, avg=244.42, stdev=33.70, samples=19 00:26:39.561 lat (msec) : 10=0.08%, 50=15.82%, 100=80.78%, 250=3.33% 00:26:39.561 cpu : usr=42.62%, sys=0.62%, ctx=1217, majf=0, minf=9 00:26:39.561 IO depths : 1=1.7%, 2=4.3%, 4=13.3%, 8=68.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=91.3%, 8=4.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102070: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=233, BW=935KiB/s (958kB/s)(9372KiB/10019msec) 00:26:39.561 slat (usec): min=4, max=8026, avg=25.93, stdev=298.68 00:26:39.561 clat (msec): min=19, max=153, avg=68.22, stdev=18.90 00:26:39.561 lat (msec): min=19, max=153, avg=68.24, stdev=18.89 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 56], 00:26:39.561 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 70], 00:26:39.561 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 99], 00:26:39.561 | 99.00th=[ 130], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:26:39.561 | 99.99th=[ 155] 00:26:39.561 bw ( KiB/s): min= 768, max= 1232, per=3.78%, avg=925.89, stdev=130.09, 
samples=19 00:26:39.561 iops : min= 192, max= 308, avg=231.47, stdev=32.52, samples=19 00:26:39.561 lat (msec) : 20=0.09%, 50=15.02%, 100=80.75%, 250=4.14% 00:26:39.561 cpu : usr=37.80%, sys=0.38%, ctx=1106, majf=0, minf=9 00:26:39.561 IO depths : 1=2.2%, 2=5.0%, 4=14.5%, 8=67.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102071: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=290, BW=1161KiB/s (1188kB/s)(11.4MiB/10047msec) 00:26:39.561 slat (usec): min=6, max=7036, avg=18.21, stdev=168.56 00:26:39.561 clat (msec): min=2, max=121, avg=54.93, stdev=19.37 00:26:39.561 lat (msec): min=2, max=121, avg=54.95, stdev=19.37 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 3], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:26:39.561 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 60], 00:26:39.561 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 88], 00:26:39.561 | 99.00th=[ 99], 99.50th=[ 107], 99.90th=[ 122], 99.95th=[ 122], 00:26:39.561 | 99.99th=[ 122] 00:26:39.561 bw ( KiB/s): min= 720, max= 2031, per=4.73%, avg=1158.95, stdev=269.45, samples=20 00:26:39.561 iops : min= 180, max= 507, avg=289.55, stdev=67.24, samples=20 00:26:39.561 lat (msec) : 4=1.65%, 10=1.65%, 20=0.55%, 50=38.49%, 100=56.71% 00:26:39.561 lat (msec) : 250=0.96% 00:26:39.561 cpu : usr=41.90%, sys=0.70%, ctx=1354, majf=0, minf=9 00:26:39.561 IO depths : 1=1.6%, 2=3.5%, 4=11.1%, 8=72.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102072: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=245, BW=983KiB/s (1006kB/s)(9864KiB/10036msec) 00:26:39.561 slat (nsec): min=5086, max=74760, avg=12750.72, stdev=8146.37 00:26:39.561 clat (msec): min=7, max=131, avg=64.96, stdev=18.87 00:26:39.561 lat (msec): min=7, max=131, avg=64.98, stdev=18.87 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 51], 00:26:39.561 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:26:39.561 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 96], 00:26:39.561 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:26:39.561 | 99.99th=[ 132] 00:26:39.561 bw ( KiB/s): min= 688, max= 1408, per=4.00%, avg=981.60, stdev=161.40, samples=20 00:26:39.561 iops : min= 172, max= 352, avg=245.35, stdev=40.31, samples=20 00:26:39.561 lat (msec) : 10=1.30%, 20=0.65%, 50=17.72%, 100=76.68%, 250=3.65% 00:26:39.561 cpu : usr=36.35%, sys=0.49%, ctx=1110, majf=0, minf=9 00:26:39.561 IO depths : 1=1.7%, 2=3.5%, 4=11.2%, 8=71.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102073: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=271, BW=1084KiB/s (1110kB/s)(10.6MiB/10029msec) 00:26:39.561 slat (usec): min=4, max=4048, avg=15.30, stdev=87.01 00:26:39.561 clat (msec): min=25, max=137, avg=58.83, stdev=18.25 00:26:39.561 lat (msec): min=25, max=137, avg=58.85, stdev=18.24 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:26:39.561 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:26:39.561 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 94], 00:26:39.561 | 99.00th=[ 111], 99.50th=[ 123], 99.90th=[ 138], 99.95th=[ 138], 00:26:39.561 | 99.99th=[ 138] 00:26:39.561 bw ( KiB/s): min= 640, max= 1376, per=4.42%, avg=1084.45, stdev=188.18, samples=20 00:26:39.561 iops : min= 160, max= 344, avg=271.10, stdev=47.05, samples=20 00:26:39.561 lat (msec) : 50=35.17%, 100=61.18%, 250=3.64% 00:26:39.561 cpu : usr=42.99%, sys=0.68%, ctx=1635, majf=0, minf=9 00:26:39.561 IO depths : 1=1.4%, 2=3.2%, 4=10.7%, 8=72.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102074: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=245, BW=983KiB/s (1007kB/s)(9848KiB/10016msec) 00:26:39.561 slat (usec): min=4, max=8001, avg=21.52, stdev=241.53 00:26:39.561 clat (msec): min=25, max=156, avg=64.95, stdev=18.46 00:26:39.561 lat (msec): min=25, max=156, avg=64.97, stdev=18.46 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:26:39.561 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 70], 00:26:39.561 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 95], 00:26:39.561 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 157], 99.95th=[ 157], 00:26:39.561 | 99.99th=[ 157] 00:26:39.561 bw ( KiB/s): min= 728, max= 1456, per=3.99%, avg=978.21, stdev=168.51, samples=19 00:26:39.561 iops : min= 182, max= 364, avg=244.53, stdev=42.12, samples=19 00:26:39.561 lat (msec) : 50=23.80%, 100=73.68%, 250=2.52% 00:26:39.561 cpu : usr=36.64%, sys=0.61%, ctx=989, majf=0, minf=9 00:26:39.561 IO depths : 1=1.7%, 2=3.8%, 4=12.2%, 8=70.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102075: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=257, BW=1029KiB/s (1053kB/s)(10.1MiB/10026msec) 00:26:39.561 slat (usec): min=6, max=8030, avg=21.88, stdev=244.43 00:26:39.561 clat (msec): min=25, max=137, avg=61.98, stdev=19.49 00:26:39.561 lat (msec): min=25, max=137, avg=62.01, stdev=19.49 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 45], 00:26:39.561 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:26:39.561 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 93], 95.00th=[ 101], 00:26:39.561 | 99.00th=[ 118], 99.50th=[ 132], 99.90th=[ 134], 
99.95th=[ 134], 00:26:39.561 | 99.99th=[ 138] 00:26:39.561 bw ( KiB/s): min= 680, max= 1328, per=4.18%, avg=1024.80, stdev=191.91, samples=20 00:26:39.561 iops : min= 170, max= 332, avg=256.20, stdev=47.98, samples=20 00:26:39.561 lat (msec) : 50=29.29%, 100=65.63%, 250=5.08% 00:26:39.561 cpu : usr=39.52%, sys=0.51%, ctx=1124, majf=0, minf=9 00:26:39.561 IO depths : 1=1.0%, 2=2.4%, 4=11.2%, 8=73.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102076: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=239, BW=957KiB/s (980kB/s)(9592KiB/10021msec) 00:26:39.561 slat (usec): min=4, max=8029, avg=20.16, stdev=200.86 00:26:39.561 clat (msec): min=22, max=145, avg=66.70, stdev=17.58 00:26:39.561 lat (msec): min=22, max=145, avg=66.72, stdev=17.58 00:26:39.561 clat percentiles (msec): 00:26:39.561 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 52], 00:26:39.561 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 68], 00:26:39.561 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 97], 00:26:39.561 | 99.00th=[ 118], 99.50th=[ 126], 99.90th=[ 146], 99.95th=[ 146], 00:26:39.561 | 99.99th=[ 146] 00:26:39.561 bw ( KiB/s): min= 640, max= 1152, per=3.90%, avg=955.20, stdev=132.36, samples=20 00:26:39.561 iops : min= 160, max= 288, avg=238.80, stdev=33.09, samples=20 00:26:39.561 lat (msec) : 50=17.22%, 100=78.52%, 250=4.25% 00:26:39.561 cpu : usr=38.74%, sys=0.45%, ctx=1124, majf=0, minf=9 00:26:39.561 IO depths : 1=1.7%, 2=3.8%, 4=12.5%, 8=69.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=90.8%, 8=4.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.561 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.561 filename2: (groupid=0, jobs=1): err= 0: pid=102077: Fri Oct 4 06:44:30 2024 00:26:39.561 read: IOPS=244, BW=976KiB/s (1000kB/s)(9764KiB/10001msec) 00:26:39.561 slat (usec): min=3, max=8038, avg=26.70, stdev=324.41 00:26:39.561 clat (usec): min=1185, max=149006, avg=65394.19, stdev=22696.74 00:26:39.561 lat (usec): min=1192, max=149014, avg=65420.89, stdev=22702.01 00:26:39.561 clat percentiles (usec): 00:26:39.561 | 1.00th=[ 1434], 5.00th=[ 20055], 10.00th=[ 44827], 20.00th=[ 54264], 00:26:39.561 | 30.00th=[ 58459], 40.00th=[ 60031], 50.00th=[ 62653], 60.00th=[ 68682], 00:26:39.561 | 70.00th=[ 73925], 80.00th=[ 81265], 90.00th=[ 92799], 95.00th=[104334], 00:26:39.561 | 99.00th=[125305], 99.50th=[125305], 99.90th=[149947], 99.95th=[149947], 00:26:39.561 | 99.99th=[149947] 00:26:39.561 bw ( KiB/s): min= 688, max= 1040, per=3.75%, avg=920.00, stdev=96.11, samples=19 00:26:39.561 iops : min= 172, max= 260, avg=230.00, stdev=24.03, samples=19 00:26:39.561 lat (msec) : 2=3.73%, 4=0.20%, 10=0.90%, 50=10.32%, 100=78.74% 00:26:39.561 lat (msec) : 250=6.10% 00:26:39.561 cpu : usr=33.38%, sys=0.45%, ctx=937, majf=0, minf=9 00:26:39.561 IO depths : 1=1.7%, 2=4.2%, 4=12.9%, 8=69.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:39.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 complete : 0=0.0%, 4=91.0%, 8=4.5%, 16=4.5%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:39.561 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:39.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:39.562 00:26:39.562 Run status group 0 (all jobs): 00:26:39.562 READ: bw=23.9MiB/s (25.1MB/s), 935KiB/s-1169KiB/s (958kB/s-1197kB/s), io=240MiB (252MB), run=10001-10049msec 00:26:39.562 06:44:30 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:39.562 06:44:30 -- target/dif.sh@43 -- # local sub 00:26:39.562 06:44:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:39.562 06:44:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:39.562 06:44:30 -- target/dif.sh@36 -- # local sub_id=0 00:26:39.562 06:44:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:39.562 06:44:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:39.562 06:44:30 -- target/dif.sh@36 -- # local sub_id=1 00:26:39.562 06:44:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:39.562 06:44:30 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:39.562 06:44:30 -- target/dif.sh@36 -- # local sub_id=2 00:26:39.562 06:44:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # numjobs=2 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # iodepth=8 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # runtime=5 00:26:39.562 06:44:30 -- target/dif.sh@115 -- # files=1 00:26:39.562 06:44:30 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:39.562 06:44:30 -- target/dif.sh@28 -- # local sub 00:26:39.562 06:44:30 -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.562 06:44:30 -- target/dif.sh@31 -- # create_subsystem 0 00:26:39.562 06:44:30 -- target/dif.sh@18 
-- # local sub_id=0 00:26:39.562 06:44:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 bdev_null0 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 [2024-10-04 06:44:30.509625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@30 -- # for sub in "$@" 00:26:39.562 06:44:30 -- target/dif.sh@31 -- # create_subsystem 1 00:26:39.562 06:44:30 -- target/dif.sh@18 -- # local sub_id=1 00:26:39.562 06:44:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 bdev_null1 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.562 06:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:39.562 06:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:39.562 06:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:39.562 06:44:30 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:39.562 06:44:30 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:39.562 06:44:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:39.562 06:44:30 -- nvmf/common.sh@520 -- # config=() 00:26:39.562 06:44:30 -- nvmf/common.sh@520 -- # local subsystem config 00:26:39.562 06:44:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
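The create_subsystem() trace above reduces to four rpc.py calls per subsystem. A minimal standalone sketch (rpc.py path assumed; every RPC name and argument value is copied from this run's trace):

    # Sketch of the per-subsystem setup traced above; rpc_cmd wraps scripts/rpc.py.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location
    for sub in 0 1; do
        $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done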
00:26:39.562 06:44:30 -- target/dif.sh@82 -- # gen_fio_conf 00:26:39.562 06:44:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.562 06:44:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:39.562 { 00:26:39.562 "params": { 00:26:39.562 "name": "Nvme$subsystem", 00:26:39.562 "trtype": "$TEST_TRANSPORT", 00:26:39.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.562 "adrfam": "ipv4", 00:26:39.562 "trsvcid": "$NVMF_PORT", 00:26:39.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.562 "hdgst": ${hdgst:-false}, 00:26:39.562 "ddgst": ${ddgst:-false} 00:26:39.562 }, 00:26:39.562 "method": "bdev_nvme_attach_controller" 00:26:39.562 } 00:26:39.562 EOF 00:26:39.562 )") 00:26:39.562 06:44:30 -- target/dif.sh@54 -- # local file 00:26:39.562 06:44:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.562 06:44:30 -- target/dif.sh@56 -- # cat 00:26:39.562 06:44:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:39.562 06:44:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:39.562 06:44:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:39.562 06:44:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:39.562 06:44:30 -- common/autotest_common.sh@1320 -- # shift 00:26:39.562 06:44:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:39.562 06:44:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.562 06:44:30 -- nvmf/common.sh@542 -- # cat 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:39.562 06:44:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:39.562 06:44:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:39.562 { 00:26:39.562 "params": { 00:26:39.562 "name": "Nvme$subsystem", 00:26:39.562 "trtype": "$TEST_TRANSPORT", 00:26:39.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.562 "adrfam": "ipv4", 00:26:39.562 "trsvcid": "$NVMF_PORT", 00:26:39.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.562 "hdgst": ${hdgst:-false}, 00:26:39.562 "ddgst": ${ddgst:-false} 00:26:39.562 }, 00:26:39.562 "method": "bdev_nvme_attach_controller" 00:26:39.562 } 00:26:39.562 EOF 00:26:39.562 )") 00:26:39.562 06:44:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:39.562 06:44:30 -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.562 06:44:30 -- target/dif.sh@73 -- # cat 00:26:39.562 06:44:30 -- nvmf/common.sh@542 -- # cat 00:26:39.562 06:44:30 -- target/dif.sh@72 -- # (( file++ )) 00:26:39.562 06:44:30 -- target/dif.sh@72 -- # (( file <= files )) 00:26:39.562 06:44:30 -- nvmf/common.sh@544 -- # jq . 
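The interleaved trace above is two generators feeding one fio process: gen_nvmf_target_json emits the JSON attach config on fd 62 and gen_fio_conf emits the job file on fd 61. De-interleaved, the plumbing is roughly as follows (a sketch, with process substitution standing in for the script's explicit fd redirections, not the literal dif.sh code):

    # fd 62 = JSON bdev config, fd 61 = fio job file.
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) <(gen_fio_conf)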
00:26:39.562 06:44:30 -- nvmf/common.sh@545 -- # IFS=, 00:26:39.562 06:44:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:39.562 "params": { 00:26:39.562 "name": "Nvme0", 00:26:39.562 "trtype": "tcp", 00:26:39.562 "traddr": "10.0.0.2", 00:26:39.562 "adrfam": "ipv4", 00:26:39.562 "trsvcid": "4420", 00:26:39.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:39.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:39.562 "hdgst": false, 00:26:39.562 "ddgst": false 00:26:39.562 }, 00:26:39.562 "method": "bdev_nvme_attach_controller" 00:26:39.562 },{ 00:26:39.562 "params": { 00:26:39.562 "name": "Nvme1", 00:26:39.562 "trtype": "tcp", 00:26:39.562 "traddr": "10.0.0.2", 00:26:39.562 "adrfam": "ipv4", 00:26:39.562 "trsvcid": "4420", 00:26:39.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.562 "hdgst": false, 00:26:39.562 "ddgst": false 00:26:39.562 }, 00:26:39.562 "method": "bdev_nvme_attach_controller" 00:26:39.562 }' 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:39.562 06:44:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:39.562 06:44:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:39.562 06:44:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:39.562 06:44:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:39.562 06:44:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:39.562 06:44:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:39.562 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:39.562 ... 00:26:39.562 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:39.562 ... 00:26:39.562 fio-3.35 00:26:39.562 Starting 4 threads 00:26:39.562 [2024-10-04 06:44:31.276298] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
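The filename0/filename1 headers above decode the bs=8k,16k,128k triple as read/write/trim block sizes, and "Starting 4 threads" is 2 files x numjobs=2. The job file itself goes to an fd and is never echoed into the log; a hypothetical reconstruction consistent with those headers and the dif.sh parameters (the Nvme0n1/Nvme1n1 bdev names are assumed):

    ; Hypothetical job file matching the headers above; the real gen_fio_conf
    ; output is not captured in this log.
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1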
00:26:39.562 [2024-10-04 06:44:31.276371] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:43.747 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=102214: Fri Oct 4 06:44:36 2024 00:26:43.747 read: IOPS=2178, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5001msec) 00:26:43.747 slat (usec): min=6, max=103, avg=25.49, stdev=12.98 00:26:43.747 clat (usec): min=1454, max=5534, avg=3542.25, stdev=183.88 00:26:43.747 lat (usec): min=1472, max=5554, avg=3567.74, stdev=186.66 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[ 3195], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3425], 00:26:43.747 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:26:43.747 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3752], 95.00th=[ 3851], 00:26:43.747 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4359], 99.95th=[ 4948], 00:26:43.747 | 99.99th=[ 5211] 00:26:43.747 bw ( KiB/s): min=16896, max=18597, per=24.99%, avg=17424.50, stdev=494.12, samples=10 00:26:43.747 iops : min= 2112, max= 2324, avg=2178.00, stdev=61.60, samples=10 00:26:43.747 lat (msec) : 2=0.03%, 4=98.15%, 10=1.83% 00:26:43.747 cpu : usr=94.58%, sys=4.10%, ctx=64, majf=0, minf=9 00:26:43.747 IO depths : 1=11.3%, 2=25.0%, 4=50.0%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=10896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=102215: Fri Oct 4 06:44:36 2024 00:26:43.747 read: IOPS=2182, BW=17.1MiB/s (17.9MB/s)(85.3MiB/5003msec) 00:26:43.747 slat (nsec): min=5855, max=67320, avg=8797.46, stdev=4897.29 00:26:43.747 clat (usec): min=936, max=6037, avg=3620.39, stdev=219.62 00:26:43.747 lat (usec): min=944, max=6063, avg=3629.19, stdev=219.18 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3490], 00:26:43.747 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3654], 00:26:43.747 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3949], 00:26:43.747 | 99.00th=[ 4178], 99.50th=[ 4228], 99.90th=[ 4424], 99.95th=[ 5735], 00:26:43.747 | 99.99th=[ 5800] 00:26:43.747 bw ( KiB/s): min=16896, max=18672, per=25.05%, avg=17459.20, stdev=537.04, samples=10 00:26:43.747 iops : min= 2112, max= 2334, avg=2182.40, stdev=67.13, samples=10 00:26:43.747 lat (usec) : 1000=0.05% 00:26:43.747 lat (msec) : 2=0.16%, 4=96.42%, 10=3.36% 00:26:43.747 cpu : usr=94.50%, sys=4.32%, ctx=8, majf=0, minf=9 00:26:43.747 IO depths : 1=11.2%, 2=24.7%, 4=50.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=10920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:43.747 filename1: (groupid=0, jobs=1): err= 0: pid=102216: Fri Oct 4 06:44:36 2024 00:26:43.747 read: IOPS=2178, BW=17.0MiB/s (17.8MB/s)(85.1MiB/5002msec) 00:26:43.747 slat (usec): min=4, max=191, avg=24.68, stdev=11.63 00:26:43.747 clat (usec): min=2516, max=4929, avg=3564.56, stdev=188.40 00:26:43.747 lat (usec): min=2535, max=4951, avg=3589.23, stdev=188.99 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 
1.00th=[ 3163], 5.00th=[ 3261], 10.00th=[ 3326], 20.00th=[ 3425], 00:26:43.747 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:43.747 | 70.00th=[ 3654], 80.00th=[ 3687], 90.00th=[ 3785], 95.00th=[ 3884], 00:26:43.747 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4424], 99.95th=[ 4686], 00:26:43.747 | 99.99th=[ 4948] 00:26:43.747 bw ( KiB/s): min=16896, max=18560, per=24.99%, avg=17420.80, stdev=484.41, samples=10 00:26:43.747 iops : min= 2112, max= 2320, avg=2177.60, stdev=60.55, samples=10 00:26:43.747 lat (msec) : 4=97.66%, 10=2.34% 00:26:43.747 cpu : usr=94.08%, sys=4.16%, ctx=589, majf=0, minf=10 00:26:43.747 IO depths : 1=8.5%, 2=25.0%, 4=50.0%, 8=16.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=10896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:43.747 filename1: (groupid=0, jobs=1): err= 0: pid=102217: Fri Oct 4 06:44:36 2024 00:26:43.747 read: IOPS=2176, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5001msec) 00:26:43.747 slat (usec): min=6, max=103, avg=22.53, stdev=13.30 00:26:43.747 clat (usec): min=937, max=6299, avg=3601.21, stdev=270.52 00:26:43.747 lat (usec): min=945, max=6324, avg=3623.74, stdev=270.70 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3458], 00:26:43.747 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:26:43.747 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3851], 95.00th=[ 4047], 00:26:43.748 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5538], 99.95th=[ 6128], 00:26:43.748 | 99.99th=[ 6128] 00:26:43.748 bw ( KiB/s): min=17104, max=18480, per=25.06%, avg=17470.22, stdev=444.73, samples=9 00:26:43.748 iops : min= 2138, max= 2310, avg=2183.78, stdev=55.59, samples=9 00:26:43.748 lat (usec) : 1000=0.05% 00:26:43.748 lat (msec) : 2=0.04%, 4=94.22%, 10=5.70% 00:26:43.748 cpu : usr=94.58%, sys=3.84%, ctx=6, majf=0, minf=9 00:26:43.748 IO depths : 1=1.4%, 2=6.6%, 4=68.3%, 8=23.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=10883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:43.748 00:26:43.748 Run status group 0 (all jobs): 00:26:43.748 READ: bw=68.1MiB/s (71.4MB/s), 17.0MiB/s-17.1MiB/s (17.8MB/s-17.9MB/s), io=341MiB (357MB), run=5001-5003msec 00:26:44.006 06:44:36 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:44.006 06:44:36 -- target/dif.sh@43 -- # local sub 00:26:44.006 06:44:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:44.006 06:44:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:44.006 06:44:36 -- target/dif.sh@36 -- # local sub_id=0 00:26:44.006 06:44:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:44.006 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.006 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:44.264 06:44:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:44.264 06:44:36 -- target/dif.sh@36 -- # local sub_id=1 00:26:44.264 06:44:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 00:26:44.264 real 0m23.777s 00:26:44.264 user 2m7.862s 00:26:44.264 sys 0m3.689s 00:26:44.264 06:44:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.264 ************************************ 00:26:44.264 END TEST fio_dif_rand_params 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 ************************************ 00:26:44.264 06:44:36 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:44.264 06:44:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:44.264 06:44:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 ************************************ 00:26:44.264 START TEST fio_dif_digest 00:26:44.264 ************************************ 00:26:44.264 06:44:36 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:44.264 06:44:36 -- target/dif.sh@123 -- # local NULL_DIF 00:26:44.264 06:44:36 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:44.264 06:44:36 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:44.264 06:44:36 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:44.264 06:44:36 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:44.264 06:44:36 -- target/dif.sh@127 -- # numjobs=3 00:26:44.264 06:44:36 -- target/dif.sh@127 -- # iodepth=3 00:26:44.264 06:44:36 -- target/dif.sh@127 -- # runtime=10 00:26:44.264 06:44:36 -- target/dif.sh@128 -- # hdgst=true 00:26:44.264 06:44:36 -- target/dif.sh@128 -- # ddgst=true 00:26:44.264 06:44:36 -- target/dif.sh@130 -- # create_subsystems 0 00:26:44.264 06:44:36 -- target/dif.sh@28 -- # local sub 00:26:44.264 06:44:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:44.264 06:44:36 -- target/dif.sh@31 -- # create_subsystem 0 00:26:44.264 06:44:36 -- target/dif.sh@18 -- # local sub_id=0 00:26:44.264 06:44:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 bdev_null0 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.264 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:44.264 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:26:44.264 [2024-10-04 06:44:36.818244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.264 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:44.264 06:44:36 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:44.264 06:44:36 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:44.264 06:44:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:44.264 06:44:36 -- nvmf/common.sh@520 -- # config=() 00:26:44.264 06:44:36 -- nvmf/common.sh@520 -- # local subsystem config 00:26:44.264 06:44:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:44.264 06:44:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:44.264 { 00:26:44.264 "params": { 00:26:44.264 "name": "Nvme$subsystem", 00:26:44.264 "trtype": "$TEST_TRANSPORT", 00:26:44.264 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.264 "adrfam": "ipv4", 00:26:44.264 "trsvcid": "$NVMF_PORT", 00:26:44.264 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.264 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.264 "hdgst": ${hdgst:-false}, 00:26:44.264 "ddgst": ${ddgst:-false} 00:26:44.264 }, 00:26:44.264 "method": "bdev_nvme_attach_controller" 00:26:44.264 } 00:26:44.264 EOF 00:26:44.264 )") 00:26:44.264 06:44:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.264 06:44:36 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.264 06:44:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:44.264 06:44:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.264 06:44:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:44.264 06:44:36 -- target/dif.sh@82 -- # gen_fio_conf 00:26:44.265 06:44:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.265 06:44:36 -- nvmf/common.sh@542 -- # cat 00:26:44.265 06:44:36 -- common/autotest_common.sh@1320 -- # shift 00:26:44.265 06:44:36 -- target/dif.sh@54 -- # local file 00:26:44.265 06:44:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:44.265 06:44:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.265 06:44:36 -- target/dif.sh@56 -- # cat 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:44.265 06:44:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:44.265 06:44:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:44.265 06:44:36 -- nvmf/common.sh@544 -- # jq . 
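The sanitizers/ldd/grep/awk records above and below are a preload probe: if the fio plugin links libasan or libclang_rt.asan, that runtime must be preloaded ahead of the plugin. Condensed from the traces (both lookups came back empty in this run, so LD_PRELOAD ends up holding only the plugin):

    # Sanitizer-preload probe, condensed from the traces in this log.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    preload=
    for san in libasan libclang_rt.asan; do
        lib=$(ldd "$plugin" | grep "$san" | awk '{print $3}')
        [[ -n $lib ]] && preload+=" $lib"
    done
    LD_PRELOAD="$preload $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61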
00:26:44.265 06:44:36 -- nvmf/common.sh@545 -- # IFS=, 00:26:44.265 06:44:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:44.265 "params": { 00:26:44.265 "name": "Nvme0", 00:26:44.265 "trtype": "tcp", 00:26:44.265 "traddr": "10.0.0.2", 00:26:44.265 "adrfam": "ipv4", 00:26:44.265 "trsvcid": "4420", 00:26:44.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:44.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:44.265 "hdgst": true, 00:26:44.265 "ddgst": true 00:26:44.265 }, 00:26:44.265 "method": "bdev_nvme_attach_controller" 00:26:44.265 }' 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:44.265 06:44:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:44.265 06:44:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:44.265 06:44:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:44.265 06:44:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:44.265 06:44:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:44.265 06:44:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.522 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:44.522 ... 00:26:44.522 fio-3.35 00:26:44.522 Starting 3 threads 00:26:44.780 [2024-10-04 06:44:37.440194] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
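"hdgst": true and "ddgst": true in the config just printed enable the NVMe/TCP header and data digests (CRC32C over each PDU header and payload), which is what fio_dif_digest exercises. For comparison only, a kernel-initiator connect with the same digests would look like this (nvme-cli syntax; not executed in this run):

    # Kernel-side equivalent of the digest-enabled attach above (assumed
    # nvme-cli flags; the test itself stays in SPDK userspace).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest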
00:26:44.780 [2024-10-04 06:44:37.440285] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:56.977 00:26:56.977 filename0: (groupid=0, jobs=1): err= 0: pid=102323: Fri Oct 4 06:44:47 2024 00:26:56.977 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(262MiB/10043msec) 00:26:56.977 slat (nsec): min=6635, max=72621, avg=16480.08, stdev=7076.80 00:26:56.977 clat (usec): min=8719, max=47599, avg=14361.95, stdev=2462.14 00:26:56.977 lat (usec): min=8738, max=47617, avg=14378.43, stdev=2461.70 00:26:56.977 clat percentiles (usec): 00:26:56.977 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[13173], 00:26:56.977 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:26:56.977 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:26:56.977 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[46400], 00:26:56.977 | 99.99th=[47449] 00:26:56.977 bw ( KiB/s): min=24832, max=29696, per=30.97%, avg=26704.84, stdev=1316.60, samples=19 00:26:56.977 iops : min= 194, max= 232, avg=208.63, stdev=10.29, samples=19 00:26:56.977 lat (msec) : 10=11.90%, 20=88.00%, 50=0.10% 00:26:56.977 cpu : usr=94.93%, sys=3.66%, ctx=16, majf=0, minf=11 00:26:56.977 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.977 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.977 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:56.977 filename0: (groupid=0, jobs=1): err= 0: pid=102324: Fri Oct 4 06:44:47 2024 00:26:56.977 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(291MiB/10010msec) 00:26:56.977 slat (nsec): min=7026, max=69685, avg=16890.67, stdev=6579.10 00:26:56.978 clat (usec): min=8105, max=53903, avg=12872.67, stdev=8110.24 00:26:56.978 lat (usec): min=8135, max=53917, avg=12889.57, stdev=8110.25 00:26:56.978 clat percentiles (usec): 00:26:56.978 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:26:56.978 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:26:56.978 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[13173], 00:26:56.978 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:26:56.978 | 99.99th=[53740] 00:26:56.978 bw ( KiB/s): min=23040, max=34560, per=34.79%, avg=30005.89, stdev=3428.85, samples=19 00:26:56.978 iops : min= 180, max= 270, avg=234.42, stdev=26.79, samples=19 00:26:56.978 lat (msec) : 10=7.43%, 20=88.45%, 50=0.13%, 100=3.99% 00:26:56.978 cpu : usr=92.63%, sys=5.52%, ctx=10, majf=0, minf=9 00:26:56.978 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.978 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.978 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:56.978 filename0: (groupid=0, jobs=1): err= 0: pid=102325: Fri Oct 4 06:44:47 2024 00:26:56.978 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10004msec) 00:26:56.978 slat (usec): min=6, max=101, avg=19.78, stdev= 8.30 00:26:56.978 clat (usec): min=5994, max=17507, avg=12773.61, stdev=2256.34 00:26:56.978 lat (usec): min=6002, max=17531, avg=12793.39, stdev=2256.79 00:26:56.978 clat percentiles (usec): 00:26:56.978 | 
1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[11207], 00:26:56.978 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:26:56.978 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:26:56.978 | 99.00th=[16188], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:26:56.978 | 99.99th=[17433] 00:26:56.978 bw ( KiB/s): min=27648, max=33280, per=34.65%, avg=29881.63, stdev=1709.10, samples=19 00:26:56.978 iops : min= 216, max= 260, avg=233.42, stdev=13.38, samples=19 00:26:56.978 lat (msec) : 10=17.61%, 20=82.39% 00:26:56.978 cpu : usr=93.05%, sys=4.93%, ctx=108, majf=0, minf=9 00:26:56.978 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:56.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:56.978 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:56.978 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:56.978 00:26:56.978 Run status group 0 (all jobs): 00:26:56.978 READ: bw=84.2MiB/s (88.3MB/s), 26.0MiB/s-29.3MiB/s (27.3MB/s-30.7MB/s), io=846MiB (887MB), run=10004-10043msec 00:26:56.978 06:44:47 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:56.978 06:44:47 -- target/dif.sh@43 -- # local sub 00:26:56.978 06:44:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:56.978 06:44:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:56.978 06:44:47 -- target/dif.sh@36 -- # local sub_id=0 00:26:56.978 06:44:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:56.978 06:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.978 06:44:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.978 06:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.978 06:44:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:56.978 06:44:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:56.978 06:44:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.978 06:44:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:56.978 00:26:56.978 real 0m11.125s 00:26:56.978 user 0m28.855s 00:26:56.978 sys 0m1.721s 00:26:56.978 06:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.978 ************************************ 00:26:56.978 06:44:47 -- common/autotest_common.sh@10 -- # set +x 00:26:56.978 END TEST fio_dif_digest 00:26:56.978 ************************************ 00:26:56.978 06:44:47 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:56.978 06:44:47 -- target/dif.sh@147 -- # nvmftestfini 00:26:56.978 06:44:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:56.978 06:44:47 -- nvmf/common.sh@116 -- # sync 00:26:56.978 06:44:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:56.978 06:44:48 -- nvmf/common.sh@119 -- # set +e 00:26:56.978 06:44:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:56.978 06:44:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:56.978 rmmod nvme_tcp 00:26:56.978 rmmod nvme_fabrics 00:26:56.978 rmmod nvme_keyring 00:26:56.978 06:44:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:56.978 06:44:48 -- nvmf/common.sh@123 -- # set -e 00:26:56.978 06:44:48 -- nvmf/common.sh@124 -- # return 0 00:26:56.978 06:44:48 -- nvmf/common.sh@477 -- # '[' -n 101555 ']' 00:26:56.978 06:44:48 -- nvmf/common.sh@478 -- # killprocess 101555 00:26:56.978 06:44:48 -- common/autotest_common.sh@926 -- # '[' -z 101555 ']' 00:26:56.978 
06:44:48 -- common/autotest_common.sh@930 -- # kill -0 101555 00:26:56.978 06:44:48 -- common/autotest_common.sh@931 -- # uname 00:26:56.978 06:44:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:56.978 06:44:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101555 00:26:56.978 06:44:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:56.978 06:44:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:56.978 killing process with pid 101555 00:26:56.978 06:44:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101555' 00:26:56.978 06:44:48 -- common/autotest_common.sh@945 -- # kill 101555 00:26:56.978 06:44:48 -- common/autotest_common.sh@950 -- # wait 101555 00:26:56.978 06:44:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:56.978 06:44:48 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:56.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.978 Waiting for block devices as requested 00:26:56.978 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:56.978 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:56.978 06:44:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:56.978 06:44:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:56.978 06:44:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.978 06:44:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:56.978 06:44:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.978 06:44:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.978 06:44:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.978 06:44:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:56.978 00:26:56.978 real 1m0.514s 00:26:56.978 user 3m54.022s 00:26:56.978 sys 0m13.354s 00:26:56.978 06:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.978 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:26:56.978 ************************************ 00:26:56.978 END TEST nvmf_dif 00:26:56.978 ************************************ 00:26:56.978 06:44:49 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:56.978 06:44:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:56.978 06:44:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:56.978 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:26:56.978 ************************************ 00:26:56.978 START TEST nvmf_abort_qd_sizes 00:26:56.978 ************************************ 00:26:56.978 06:44:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:56.978 * Looking for test storage... 
00:26:56.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:56.978 06:44:49 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:56.978 06:44:49 -- nvmf/common.sh@7 -- # uname -s 00:26:56.978 06:44:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.978 06:44:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.978 06:44:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.978 06:44:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.978 06:44:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.978 06:44:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.978 06:44:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.978 06:44:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.978 06:44:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.978 06:44:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.978 06:44:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:26:56.978 06:44:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c 00:26:56.978 06:44:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.978 06:44:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.978 06:44:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:56.978 06:44:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.978 06:44:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.978 06:44:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.978 06:44:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.978 06:44:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.978 06:44:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.978 06:44:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.978 06:44:49 -- paths/export.sh@5 -- # export PATH 00:26:56.978 06:44:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.978 06:44:49 -- nvmf/common.sh@46 -- # : 0 00:26:56.979 06:44:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:56.979 06:44:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:56.979 06:44:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:56.979 06:44:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.979 06:44:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.979 06:44:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:56.979 06:44:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:56.979 06:44:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:56.979 06:44:49 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:56.979 06:44:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:56.979 06:44:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.979 06:44:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:56.979 06:44:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:56.979 06:44:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:56.979 06:44:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.979 06:44:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.979 06:44:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.979 06:44:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:56.979 06:44:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:56.979 06:44:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:56.979 06:44:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:56.979 06:44:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:56.979 06:44:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:56.979 06:44:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.979 06:44:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.979 06:44:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:56.979 06:44:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:56.979 06:44:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:56.979 06:44:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:56.979 06:44:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:56.979 06:44:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.979 06:44:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:56.979 06:44:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:56.979 06:44:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:56.979 06:44:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:56.979 06:44:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:56.979 06:44:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:56.979 Cannot find device "nvmf_tgt_br" 00:26:56.979 06:44:49 -- nvmf/common.sh@154 -- # true 00:26:56.979 06:44:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:56.979 Cannot find device "nvmf_tgt_br2" 00:26:56.979 06:44:49 -- nvmf/common.sh@155 -- # true 
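The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any leftovers, then builds the topology from scratch. The records that follow do exactly that; condensed to the first target interface (the nvmf_tgt_if2/nvmf_tgt_br2 pair is set up the same way):

    # Condensed veth/bridge topology built in the records below; all commands
    # appear verbatim in the trace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT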
00:26:56.979 06:44:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:56.979 06:44:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:56.979 Cannot find device "nvmf_tgt_br" 00:26:56.979 06:44:49 -- nvmf/common.sh@157 -- # true 00:26:56.979 06:44:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:56.979 Cannot find device "nvmf_tgt_br2" 00:26:56.979 06:44:49 -- nvmf/common.sh@158 -- # true 00:26:56.979 06:44:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:56.979 06:44:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:56.979 06:44:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:56.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:56.979 06:44:49 -- nvmf/common.sh@161 -- # true 00:26:56.979 06:44:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:56.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:56.979 06:44:49 -- nvmf/common.sh@162 -- # true 00:26:56.979 06:44:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:56.979 06:44:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:56.979 06:44:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:56.979 06:44:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:56.979 06:44:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:56.979 06:44:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:56.979 06:44:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:56.979 06:44:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:56.979 06:44:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:56.979 06:44:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:56.979 06:44:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:56.979 06:44:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:56.979 06:44:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:56.979 06:44:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:56.979 06:44:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:56.979 06:44:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:56.979 06:44:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:56.979 06:44:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:56.979 06:44:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:56.979 06:44:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:56.979 06:44:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:56.979 06:44:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:56.979 06:44:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:56.979 06:44:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:56.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:56.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:26:56.979 00:26:56.979 --- 10.0.0.2 ping statistics --- 00:26:56.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.979 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:56.979 06:44:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:56.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:56.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:26:56.979 00:26:56.979 --- 10.0.0.3 ping statistics --- 00:26:56.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.979 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:56.979 06:44:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:56.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:26:56.979 00:26:56.979 --- 10.0.0.1 ping statistics --- 00:26:56.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.979 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:56.979 06:44:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.979 06:44:49 -- nvmf/common.sh@421 -- # return 0 00:26:56.979 06:44:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:56.979 06:44:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:57.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:57.803 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:57.803 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:57.803 06:44:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.803 06:44:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:57.803 06:44:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:57.803 06:44:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.803 06:44:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:57.803 06:44:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:57.803 06:44:50 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:57.803 06:44:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:57.803 06:44:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:57.803 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.803 06:44:50 -- nvmf/common.sh@469 -- # nvmfpid=102918 00:26:57.803 06:44:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:57.803 06:44:50 -- nvmf/common.sh@470 -- # waitforlisten 102918 00:26:57.803 06:44:50 -- common/autotest_common.sh@819 -- # '[' -z 102918 ']' 00:26:57.803 06:44:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.803 06:44:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:57.803 06:44:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.803 06:44:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:57.803 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:58.062 [2024-10-04 06:44:50.510410] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 22.11.4 initialization... 
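All three pings succeed, so the bridged namespace is reachable in both directions, and nvmf_tgt is then started inside it with -m 0xf. That mask selects four cores, matching the "Total cores available: 4" notice and the four reactor starts in the next records; a quick decoder for any SPDK core mask:

    # Decode an SPDK core mask: 0xf -> "0 1 2 3".
    mask=0xf
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && printf '%d ' "$i"
    done
    echo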
00:26:58.062 [2024-10-04 06:44:50.510508] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.062 [2024-10-04 06:44:50.650669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.062 [2024-10-04 06:44:50.734520] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:58.062 [2024-10-04 06:44:50.735060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.062 [2024-10-04 06:44:50.735243] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.062 [2024-10-04 06:44:50.735482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.062 [2024-10-04 06:44:50.735765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.062 [2024-10-04 06:44:50.735894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.062 [2024-10-04 06:44:50.735981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.062 [2024-10-04 06:44:50.735982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.998 06:44:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:58.998 06:44:51 -- common/autotest_common.sh@852 -- # return 0 00:26:58.998 06:44:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:58.998 06:44:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:58.998 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:58.998 06:44:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.998 06:44:51 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:58.998 06:44:51 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:58.998 06:44:51 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:58.998 06:44:51 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:58.998 06:44:51 -- scripts/common.sh@312 -- # local nvmes 00:26:58.998 06:44:51 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:58.998 06:44:51 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:58.998 06:44:51 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:58.998 06:44:51 -- scripts/common.sh@297 -- # local bdf= 00:26:58.998 06:44:51 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:58.998 06:44:51 -- scripts/common.sh@232 -- # local class 00:26:58.998 06:44:51 -- scripts/common.sh@233 -- # local subclass 00:26:58.998 06:44:51 -- scripts/common.sh@234 -- # local progif 00:26:58.998 06:44:51 -- scripts/common.sh@235 -- # printf %02x 1 00:26:58.999 06:44:51 -- scripts/common.sh@235 -- # class=01 00:26:58.999 06:44:51 -- scripts/common.sh@236 -- # printf %02x 8 00:26:58.999 06:44:51 -- scripts/common.sh@236 -- # subclass=08 00:26:58.999 06:44:51 -- scripts/common.sh@237 -- # printf %02x 2 00:26:58.999 06:44:51 -- scripts/common.sh@237 -- # progif=02 00:26:58.999 06:44:51 -- scripts/common.sh@239 -- # hash lspci 00:26:58.999 06:44:51 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:58.999 06:44:51 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:58.999 06:44:51 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:58.999 06:44:51 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:58.999 06:44:51 -- scripts/common.sh@244 -- # tr -d '"' 00:26:58.999 06:44:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:58.999 06:44:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:58.999 06:44:51 -- scripts/common.sh@15 -- # local i 00:26:58.999 06:44:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:58.999 06:44:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:58.999 06:44:51 -- scripts/common.sh@24 -- # return 0 00:26:58.999 06:44:51 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:58.999 06:44:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:58.999 06:44:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:58.999 06:44:51 -- scripts/common.sh@15 -- # local i 00:26:58.999 06:44:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:58.999 06:44:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:58.999 06:44:51 -- scripts/common.sh@24 -- # return 0 00:26:58.999 06:44:51 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:58.999 06:44:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:58.999 06:44:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:58.999 06:44:51 -- scripts/common.sh@322 -- # uname -s 00:26:58.999 06:44:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:58.999 06:44:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:58.999 06:44:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:58.999 06:44:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:58.999 06:44:51 -- scripts/common.sh@322 -- # uname -s 00:26:58.999 06:44:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:58.999 06:44:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:58.999 06:44:51 -- scripts/common.sh@327 -- # (( 2 )) 00:26:58.999 06:44:51 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:58.999 06:44:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:58.999 06:44:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:58.999 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:58.999 ************************************ 00:26:58.999 START TEST spdk_target_abort 00:26:58.999 ************************************ 00:26:58.999 06:44:51 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:58.999 06:44:51 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:58.999 06:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.999 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.257 spdk_targetn1 00:26:59.257 06:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.257 06:44:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.257 06:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.257 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.257 [2024-10-04 
06:44:51.730348] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.257 06:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.257 06:44:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:59.257 06:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.257 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.257 06:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.257 06:44:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:59.257 06:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.257 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.257 06:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.257 06:44:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:59.257 06:44:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.257 06:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.257 [2024-10-04 06:44:51.758530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.257 06:44:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.257 06:44:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.258 06:44:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:02.544 Initializing NVMe Controllers 00:27:02.544 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:02.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:02.544 Initialization complete. Launching workers. 00:27:02.544 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10782, failed: 0 00:27:02.544 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1160, failed to submit 9622 00:27:02.544 success 750, unsuccess 410, failed 0 00:27:02.544 06:44:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.544 06:44:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:05.823 Initializing NVMe Controllers 00:27:05.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:05.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:05.824 Initialization complete. Launching workers. 00:27:05.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6007, failed: 0 00:27:05.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1234, failed to submit 4773 00:27:05.824 success 276, unsuccess 958, failed 0 00:27:05.824 06:44:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.824 06:44:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:09.104 Initializing NVMe Controllers 00:27:09.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:09.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:09.104 Initialization complete. Launching workers. 
00:27:09.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30870, failed: 0 00:27:09.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2624, failed to submit 28246 00:27:09.104 success 481, unsuccess 2143, failed 0 00:27:09.104 06:45:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:09.104 06:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.104 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.104 06:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.104 06:45:01 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:09.104 06:45:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:09.104 06:45:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.363 06:45:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:09.363 06:45:01 -- target/abort_qd_sizes.sh@62 -- # killprocess 102918 00:27:09.363 06:45:01 -- common/autotest_common.sh@926 -- # '[' -z 102918 ']' 00:27:09.363 06:45:01 -- common/autotest_common.sh@930 -- # kill -0 102918 00:27:09.363 06:45:01 -- common/autotest_common.sh@931 -- # uname 00:27:09.363 06:45:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:09.363 06:45:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102918 00:27:09.363 06:45:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:09.363 06:45:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:09.363 killing process with pid 102918 00:27:09.363 06:45:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102918' 00:27:09.363 06:45:02 -- common/autotest_common.sh@945 -- # kill 102918 00:27:09.363 06:45:02 -- common/autotest_common.sh@950 -- # wait 102918 00:27:09.621 00:27:09.621 real 0m10.643s 00:27:09.621 user 0m43.810s 00:27:09.621 sys 0m1.667s 00:27:09.621 06:45:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.621 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:27:09.621 ************************************ 00:27:09.621 END TEST spdk_target_abort 00:27:09.621 ************************************ 00:27:09.879 06:45:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:09.879 06:45:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:09.879 06:45:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.879 06:45:02 -- common/autotest_common.sh@10 -- # set +x 00:27:09.879 ************************************ 00:27:09.879 START TEST kernel_target_abort 00:27:09.879 ************************************ 00:27:09.879 06:45:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:27:09.879 06:45:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:09.879 06:45:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:09.879 06:45:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:09.879 06:45:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:09.879 06:45:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:09.879 06:45:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:09.879 06:45:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:09.879 06:45:02 -- nvmf/common.sh@627 -- # local block nvme 00:27:09.879 06:45:02 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:09.879 06:45:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:09.879 06:45:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:09.879 06:45:02 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:10.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:10.138 Waiting for block devices as requested 00:27:10.138 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:10.396 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:10.396 06:45:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:10.396 06:45:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:10.396 06:45:02 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:10.396 06:45:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:10.396 06:45:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:10.396 No valid GPT data, bailing 00:27:10.396 06:45:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:10.396 06:45:02 -- scripts/common.sh@393 -- # pt= 00:27:10.396 06:45:02 -- scripts/common.sh@394 -- # return 1 00:27:10.396 06:45:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:10.396 06:45:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:10.396 06:45:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:10.396 06:45:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:10.396 06:45:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:10.396 06:45:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:10.396 No valid GPT data, bailing 00:27:10.396 06:45:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:10.396 06:45:03 -- scripts/common.sh@393 -- # pt= 00:27:10.396 06:45:03 -- scripts/common.sh@394 -- # return 1 00:27:10.396 06:45:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:10.396 06:45:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:10.396 06:45:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:10.396 06:45:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:10.396 06:45:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:10.396 06:45:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:10.654 No valid GPT data, bailing 00:27:10.654 06:45:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:10.654 06:45:03 -- scripts/common.sh@393 -- # pt= 00:27:10.654 06:45:03 -- scripts/common.sh@394 -- # return 1 00:27:10.654 06:45:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:10.654 06:45:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:10.654 06:45:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:10.654 06:45:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:10.654 06:45:03 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:10.654 06:45:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:10.654 No valid GPT data, bailing 00:27:10.654 06:45:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:10.654 06:45:03 -- scripts/common.sh@393 -- # pt= 00:27:10.654 06:45:03 -- scripts/common.sh@394 -- # return 1 00:27:10.654 06:45:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:10.654 06:45:03 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:10.654 06:45:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:10.654 06:45:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:10.654 06:45:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:10.654 06:45:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:10.654 06:45:03 -- nvmf/common.sh@654 -- # echo 1 00:27:10.654 06:45:03 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:10.654 06:45:03 -- nvmf/common.sh@656 -- # echo 1 00:27:10.654 06:45:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:10.654 06:45:03 -- nvmf/common.sh@663 -- # echo tcp 00:27:10.654 06:45:03 -- nvmf/common.sh@664 -- # echo 4420 00:27:10.654 06:45:03 -- nvmf/common.sh@665 -- # echo ipv4 00:27:10.655 06:45:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:10.655 06:45:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f42e5a1-26bb-46bf-ae07-cde84d5ca87c --hostid=9f42e5a1-26bb-46bf-ae07-cde84d5ca87c -a 10.0.0.1 -t tcp -s 4420 00:27:10.655 00:27:10.655 Discovery Log Number of Records 2, Generation counter 2 00:27:10.655 =====Discovery Log Entry 0====== 00:27:10.655 trtype: tcp 00:27:10.655 adrfam: ipv4 00:27:10.655 subtype: current discovery subsystem 00:27:10.655 treq: not specified, sq flow control disable supported 00:27:10.655 portid: 1 00:27:10.655 trsvcid: 4420 00:27:10.655 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:10.655 traddr: 10.0.0.1 00:27:10.655 eflags: none 00:27:10.655 sectype: none 00:27:10.655 =====Discovery Log Entry 1====== 00:27:10.655 trtype: tcp 00:27:10.655 adrfam: ipv4 00:27:10.655 subtype: nvme subsystem 00:27:10.655 treq: not specified, sq flow control disable supported 00:27:10.655 portid: 1 00:27:10.655 trsvcid: 4420 00:27:10.655 subnqn: kernel_target 00:27:10.655 traddr: 10.0.0.1 00:27:10.655 eflags: none 00:27:10.655 sectype: none 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
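[Editor's note] The mkdir/echo/ln sequence above (nvmf/common.sh@645-668) builds the in-kernel NVMe/TCP target through configfs, then verifies it with nvme discover. The xtrace truncates the redirection targets of the echo commands, so the sketch below spells the sequence out using the standard nvmet configfs attribute names those echos almost certainly correspond to (an inference, not shown in the log); the paths, backing device, address, and port are taken verbatim from the log.

# Expose /dev/nvme1n3 as subsystem kernel_target over NVMe/TCP via configfs.
modprobe nvmet_tcp   # pulls in nvmet; the log loads nvmet at common.sh@630
nvmet=/sys/kernel/config/nvmet
mkdir "$nvmet/subsystems/kernel_target"
mkdir "$nvmet/subsystems/kernel_target/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-kernel_target > "$nvmet/subsystems/kernel_target/attr_serial"    # inferred attribute name
echo 1 > "$nvmet/subsystems/kernel_target/attr_allow_any_host"             # inferred attribute name
echo /dev/nvme1n3 > "$nvmet/subsystems/kernel_target/namespaces/1/device_path"
echo 1 > "$nvmet/subsystems/kernel_target/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/kernel_target" "$nvmet/ports/1/subsystems/"
# The discovery step at common.sh@671 then lists both the discovery
# subsystem and kernel_target on 10.0.0.1:4420, as shown in the log:
nvme discover -t tcp -a 10.0.0.1 -s 4420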
00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:10.655 06:45:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:13.940 Initializing NVMe Controllers 00:27:13.940 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:13.940 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:13.940 Initialization complete. Launching workers. 00:27:13.940 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 36268, failed: 0 00:27:13.940 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36268, failed to submit 0 00:27:13.940 success 0, unsuccess 36268, failed 0 00:27:13.940 06:45:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:13.940 06:45:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:17.228 Initializing NVMe Controllers 00:27:17.228 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:17.228 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:17.228 Initialization complete. Launching workers. 00:27:17.228 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 79052, failed: 0 00:27:17.228 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33426, failed to submit 45626 00:27:17.228 success 0, unsuccess 33426, failed 0 00:27:17.228 06:45:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.228 06:45:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:20.544 Initializing NVMe Controllers 00:27:20.544 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:20.544 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:20.544 Initialization complete. Launching workers. 
00:27:20.544 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 90259, failed: 0 00:27:20.544 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22502, failed to submit 67757 00:27:20.544 success 0, unsuccess 22502, failed 0 00:27:20.544 06:45:12 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:20.544 06:45:12 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:20.544 06:45:12 -- nvmf/common.sh@677 -- # echo 0 00:27:20.544 06:45:12 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:20.544 06:45:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:20.544 06:45:12 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:20.544 06:45:12 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:20.544 06:45:12 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:20.544 06:45:12 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:20.544 ************************************ 00:27:20.544 END TEST kernel_target_abort 00:27:20.544 ************************************ 00:27:20.544 00:27:20.544 real 0m10.465s 00:27:20.544 user 0m5.279s 00:27:20.544 sys 0m2.362s 00:27:20.544 06:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.544 06:45:12 -- common/autotest_common.sh@10 -- # set +x 00:27:20.544 06:45:12 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:20.544 06:45:12 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:20.544 06:45:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:20.544 06:45:12 -- nvmf/common.sh@116 -- # sync 00:27:20.544 06:45:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:20.544 06:45:12 -- nvmf/common.sh@119 -- # set +e 00:27:20.544 06:45:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:20.544 06:45:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:20.544 rmmod nvme_tcp 00:27:20.544 rmmod nvme_fabrics 00:27:20.544 rmmod nvme_keyring 00:27:20.544 06:45:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:20.544 06:45:12 -- nvmf/common.sh@123 -- # set -e 00:27:20.544 06:45:12 -- nvmf/common.sh@124 -- # return 0 00:27:20.544 Process with pid 102918 is not found 00:27:20.544 06:45:12 -- nvmf/common.sh@477 -- # '[' -n 102918 ']' 00:27:20.544 06:45:12 -- nvmf/common.sh@478 -- # killprocess 102918 00:27:20.544 06:45:12 -- common/autotest_common.sh@926 -- # '[' -z 102918 ']' 00:27:20.544 06:45:12 -- common/autotest_common.sh@930 -- # kill -0 102918 00:27:20.544 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102918) - No such process 00:27:20.544 06:45:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102918 is not found' 00:27:20.544 06:45:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:20.544 06:45:12 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:21.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:21.112 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:21.112 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:21.112 06:45:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:21.112 06:45:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:21.112 06:45:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.112 06:45:13 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:21.112 06:45:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.112 06:45:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:21.112 06:45:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.112 06:45:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:21.113 ************************************ 00:27:21.113 END TEST nvmf_abort_qd_sizes 00:27:21.113 ************************************ 00:27:21.113 00:27:21.113 real 0m24.663s 00:27:21.113 user 0m50.495s 00:27:21.113 sys 0m5.367s 00:27:21.113 06:45:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.113 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 06:45:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:21.113 06:45:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:21.113 06:45:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:21.113 06:45:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:21.113 06:45:13 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:21.113 06:45:13 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:21.113 06:45:13 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:21.113 06:45:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.113 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:21.113 06:45:13 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:21.113 06:45:13 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:21.113 06:45:13 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:21.113 06:45:13 -- common/autotest_common.sh@10 -- # set +x 00:27:23.025 INFO: APP EXITING 00:27:23.025 INFO: killing all VMs 00:27:23.025 INFO: killing vhost app 00:27:23.025 INFO: EXIT DONE 00:27:23.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:23.593 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:23.593 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:24.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:24.160 Cleaning 00:27:24.160 Removing: /var/run/dpdk/spdk0/config 00:27:24.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:24.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:24.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:24.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:24.160 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:24.160 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:24.160 Removing: /var/run/dpdk/spdk1/config 00:27:24.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:24.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:24.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:24.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:24.160 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:24.160 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:24.160 Removing: /var/run/dpdk/spdk2/config 00:27:24.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:24.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:24.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:24.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:24.160 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:24.160 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:24.160 Removing: /var/run/dpdk/spdk3/config 00:27:24.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:24.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:24.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:24.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:24.160 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:24.160 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:24.160 Removing: /var/run/dpdk/spdk4/config 00:27:24.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:24.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:24.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:24.420 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:24.420 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:24.420 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:24.420 Removing: /dev/shm/nvmf_trace.0 00:27:24.420 Removing: /dev/shm/spdk_tgt_trace.pid67335 00:27:24.420 Removing: /var/run/dpdk/spdk0 00:27:24.420 Removing: /var/run/dpdk/spdk1 00:27:24.420 Removing: /var/run/dpdk/spdk2 00:27:24.420 Removing: /var/run/dpdk/spdk3 00:27:24.420 Removing: /var/run/dpdk/spdk4 00:27:24.420 Removing: /var/run/dpdk/spdk_pid100113 00:27:24.420 Removing: /var/run/dpdk/spdk_pid100398 00:27:24.420 Removing: /var/run/dpdk/spdk_pid100696 00:27:24.420 Removing: /var/run/dpdk/spdk_pid101254 00:27:24.420 Removing: /var/run/dpdk/spdk_pid101259 00:27:24.420 Removing: /var/run/dpdk/spdk_pid101630 00:27:24.420 Removing: /var/run/dpdk/spdk_pid101790 00:27:24.420 Removing: /var/run/dpdk/spdk_pid101947 00:27:24.420 Removing: /var/run/dpdk/spdk_pid102044 00:27:24.420 Removing: /var/run/dpdk/spdk_pid102199 00:27:24.420 Removing: /var/run/dpdk/spdk_pid102308 00:27:24.420 Removing: /var/run/dpdk/spdk_pid102987 00:27:24.420 Removing: /var/run/dpdk/spdk_pid103021 00:27:24.420 Removing: /var/run/dpdk/spdk_pid103052 00:27:24.420 Removing: /var/run/dpdk/spdk_pid103303 00:27:24.420 Removing: /var/run/dpdk/spdk_pid103338 00:27:24.420 Removing: /var/run/dpdk/spdk_pid103369 00:27:24.420 Removing: /var/run/dpdk/spdk_pid67191 00:27:24.420 Removing: /var/run/dpdk/spdk_pid67335 00:27:24.420 Removing: /var/run/dpdk/spdk_pid67641 00:27:24.420 Removing: /var/run/dpdk/spdk_pid67910 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68085 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68160 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68246 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68340 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68373 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68403 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68469 00:27:24.420 Removing: /var/run/dpdk/spdk_pid68581 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69207 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69271 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69340 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69368 00:27:24.420 Removing: 
/var/run/dpdk/spdk_pid69447 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69475 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69559 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69587 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69639 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69669 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69726 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69756 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69896 00:27:24.420 Removing: /var/run/dpdk/spdk_pid69937 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70007 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70082 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70101 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70165 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70179 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70219 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70233 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70273 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70287 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70329 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70343 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70385 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70399 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70438 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70453 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70488 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70508 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70543 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70562 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70597 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70616 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70651 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70670 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70705 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70719 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70759 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70773 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70813 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70827 00:27:24.420 Removing: /var/run/dpdk/spdk_pid70867 00:27:24.679 Removing: /var/run/dpdk/spdk_pid70881 00:27:24.679 Removing: /var/run/dpdk/spdk_pid70921 00:27:24.679 Removing: /var/run/dpdk/spdk_pid70935 00:27:24.679 Removing: /var/run/dpdk/spdk_pid70975 00:27:24.679 Removing: /var/run/dpdk/spdk_pid70989 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71024 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71046 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71084 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71106 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71144 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71163 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71198 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71217 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71253 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71322 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71413 00:27:24.679 Removing: /var/run/dpdk/spdk_pid71827 00:27:24.679 Removing: /var/run/dpdk/spdk_pid78733 00:27:24.679 Removing: /var/run/dpdk/spdk_pid79074 00:27:24.679 Removing: /var/run/dpdk/spdk_pid81528 00:27:24.680 Removing: /var/run/dpdk/spdk_pid81904 00:27:24.680 Removing: /var/run/dpdk/spdk_pid82165 00:27:24.680 Removing: /var/run/dpdk/spdk_pid82211 00:27:24.680 Removing: /var/run/dpdk/spdk_pid82523 00:27:24.680 Removing: /var/run/dpdk/spdk_pid82573 00:27:24.680 Removing: /var/run/dpdk/spdk_pid82953 00:27:24.680 Removing: /var/run/dpdk/spdk_pid83477 00:27:24.680 Removing: /var/run/dpdk/spdk_pid83912 00:27:24.680 Removing: /var/run/dpdk/spdk_pid84869 00:27:24.680 Removing: /var/run/dpdk/spdk_pid85847 
00:27:24.680 Removing: /var/run/dpdk/spdk_pid85971 00:27:24.680 Removing: /var/run/dpdk/spdk_pid86032 00:27:24.680 Removing: /var/run/dpdk/spdk_pid87500 00:27:24.680 Removing: /var/run/dpdk/spdk_pid87730 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88172 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88284 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88437 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88478 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88528 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88569 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88740 00:27:24.680 Removing: /var/run/dpdk/spdk_pid88887 00:27:24.680 Removing: /var/run/dpdk/spdk_pid89151 00:27:24.680 Removing: /var/run/dpdk/spdk_pid89274 00:27:24.680 Removing: /var/run/dpdk/spdk_pid89679 00:27:24.680 Removing: /var/run/dpdk/spdk_pid90061 00:27:24.680 Removing: /var/run/dpdk/spdk_pid90067 00:27:24.680 Removing: /var/run/dpdk/spdk_pid92305 00:27:24.680 Removing: /var/run/dpdk/spdk_pid92608 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93097 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93100 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93440 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93454 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93474 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93499 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93506 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93655 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93657 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93765 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93767 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93875 00:27:24.680 Removing: /var/run/dpdk/spdk_pid93883 00:27:24.680 Removing: /var/run/dpdk/spdk_pid94360 00:27:24.680 Removing: /var/run/dpdk/spdk_pid94403 00:27:24.680 Removing: /var/run/dpdk/spdk_pid94560 00:27:24.680 Removing: /var/run/dpdk/spdk_pid94677 00:27:24.680 Removing: /var/run/dpdk/spdk_pid95068 00:27:24.680 Removing: /var/run/dpdk/spdk_pid95322 00:27:24.680 Removing: /var/run/dpdk/spdk_pid95820 00:27:24.680 Removing: /var/run/dpdk/spdk_pid96377 00:27:24.680 Removing: /var/run/dpdk/spdk_pid96840 00:27:24.680 Removing: /var/run/dpdk/spdk_pid96935 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97020 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97098 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97246 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97332 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97422 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97514 00:27:24.680 Removing: /var/run/dpdk/spdk_pid97844 00:27:24.680 Removing: /var/run/dpdk/spdk_pid98545 00:27:24.680 Removing: /var/run/dpdk/spdk_pid99906 00:27:24.680 Clean 00:27:24.939 killing process with pid 61520 00:27:24.939 killing process with pid 61526 00:27:24.939 06:45:17 -- common/autotest_common.sh@1436 -- # return 0 00:27:24.939 06:45:17 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:24.939 06:45:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:24.939 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 06:45:17 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:24.939 06:45:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:24.939 06:45:17 -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 06:45:17 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:24.939 06:45:17 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:24.939 06:45:17 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:24.939 06:45:17 
-- spdk/autotest.sh@394 -- # hash lcov 00:27:24.939 06:45:17 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:24.939 06:45:17 -- spdk/autotest.sh@396 -- # hostname 00:27:24.939 06:45:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:25.197 geninfo: WARNING: invalid characters removed from testname! 00:27:47.176 06:45:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:47.176 06:45:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:49.708 06:45:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.620 06:45:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:53.524 06:45:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:55.424 06:45:48 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:57.957 06:45:50 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:57.957 06:45:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.957 06:45:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:57.957 06:45:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.957 06:45:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.957 06:45:50 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.957 06:45:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.957 06:45:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.957 06:45:50 -- paths/export.sh@5 -- $ export PATH 00:27:57.957 06:45:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.957 06:45:50 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:57.957 06:45:50 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:57.957 06:45:50 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728024350.XXXXXX 00:27:57.957 06:45:50 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728024350.ZWpMah 00:27:57.957 06:45:50 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:57.957 06:45:50 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:27:57.957 06:45:50 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:57.957 06:45:50 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:57.957 06:45:50 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:57.957 06:45:50 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:57.957 06:45:50 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:57.957 06:45:50 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:27:57.957 06:45:50 -- common/autotest_common.sh@10 -- $ set +x 00:27:57.957 06:45:50 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:57.957 06:45:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:57.957 06:45:50 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
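[Editor's note] The geninfo/lcov invocations earlier in the log (autotest.sh@396-403) aggregate code coverage in three stages: capture a post-test trace, merge it with the pre-test baseline, then strip paths that should not count toward SPDK coverage. A condensed sketch follows, with the repeated --rc flag block folded into an array; the flags, filter patterns, and output paths are the ones visible in the log, while the genhtml_* rc options shown there only matter to the later HTML step and are omitted here.

cd /home/vagrant/spdk_repo/spdk
out=/home/vagrant/spdk_repo/spdk/../output
lcov_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q)
# Capture coverage accumulated during the tests (autotest.sh@396);
# the log uses the VM hostname as the test name:
lcov "${lcov_opts[@]}" --no-external -c -d . -t "$(hostname)" -o "$out/cov_test.info"
# Merge the pre-test baseline with the test-time trace (autotest.sh@397):
lcov "${lcov_opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# Remove external and helper-app paths from the totals (autotest.sh@398-402):
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${lcov_opts[@]}" -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done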
00:27:57.957 06:45:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:57.957 06:45:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:57.957 06:45:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:57.957 06:45:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:57.957 06:45:50 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:57.957 06:45:50 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:57.957 06:45:50 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:57.957 06:45:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:57.957 + [[ -n 5974 ]] 00:27:57.957 + sudo kill 5974 00:27:57.967 [Pipeline] } 00:27:57.986 [Pipeline] // timeout 00:27:57.992 [Pipeline] } 00:27:58.007 [Pipeline] // stage 00:27:58.013 [Pipeline] } 00:27:58.028 [Pipeline] // catchError 00:27:58.038 [Pipeline] stage 00:27:58.040 [Pipeline] { (Stop VM) 00:27:58.054 [Pipeline] sh 00:27:58.337 + vagrant halt 00:28:02.556 ==> default: Halting domain... 00:28:09.130 [Pipeline] sh 00:28:09.404 + vagrant destroy -f 00:28:12.685 ==> default: Removing domain... 00:28:12.698 [Pipeline] sh 00:28:12.978 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:12.988 [Pipeline] } 00:28:13.006 [Pipeline] // stage 00:28:13.012 [Pipeline] } 00:28:13.029 [Pipeline] // dir 00:28:13.034 [Pipeline] } 00:28:13.052 [Pipeline] // wrap 00:28:13.058 [Pipeline] } 00:28:13.074 [Pipeline] // catchError 00:28:13.085 [Pipeline] stage 00:28:13.087 [Pipeline] { (Epilogue) 00:28:13.103 [Pipeline] sh 00:28:13.388 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:18.666 [Pipeline] catchError 00:28:18.668 [Pipeline] { 00:28:18.679 [Pipeline] sh 00:28:18.958 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:19.217 Artifacts sizes are good 00:28:19.226 [Pipeline] } 00:28:19.241 [Pipeline] // catchError 00:28:19.253 [Pipeline] archiveArtifacts 00:28:19.260 Archiving artifacts 00:28:19.430 [Pipeline] cleanWs 00:28:19.452 [WS-CLEANUP] Deleting project workspace... 00:28:19.452 [WS-CLEANUP] Deferred wipeout is used... 00:28:19.521 [WS-CLEANUP] done 00:28:19.523 [Pipeline] } 00:28:19.539 [Pipeline] // stage 00:28:19.544 [Pipeline] } 00:28:19.557 [Pipeline] // node 00:28:19.563 [Pipeline] End of Pipeline 00:28:19.611 Finished: SUCCESS